Feb 12 19:18:19.043642 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:18:19.043660 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:18:19.043668 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 12 19:18:19.043674 kernel: printk: bootconsole [pl11] enabled
Feb 12 19:18:19.043679 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:18:19.043685 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 12 19:18:19.043691 kernel: random: crng init done
Feb 12 19:18:19.043697 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:18:19.043702 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 12 19:18:19.043707 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043713 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043719 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:18:19.043725 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043730 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043737 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043743 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043749 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043756 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043761 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 12 19:18:19.043767 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:19.043773 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 12 19:18:19.043778 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:18:19.043784 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:18:19.043790 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Feb 12 19:18:19.043795 kernel: Zone ranges:
Feb 12 19:18:19.043801 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 12 19:18:19.043806 kernel: DMA32 empty
Feb 12 19:18:19.043813 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:18:19.043819 kernel: Movable zone start for each node
Feb 12 19:18:19.043825 kernel: Early memory node ranges
Feb 12 19:18:19.043830 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 12 19:18:19.043836 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 12 19:18:19.043841 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 12 19:18:19.043847 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 12 19:18:19.043852 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 12 19:18:19.043858 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 12 19:18:19.043863 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 12 19:18:19.043868 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 12 19:18:19.043874 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:18:19.043881 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:18:19.043889 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 12 19:18:19.043895 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:18:19.043902 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:18:19.043908 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:18:19.043914 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 12 19:18:19.043920 kernel: psci: SMC Calling Convention v1.4
Feb 12 19:18:19.043926 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 12 19:18:19.043932 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 12 19:18:19.043938 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:18:19.043944 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:18:19.043950 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 19:18:19.043956 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:18:19.043962 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:18:19.043968 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:18:19.043974 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:18:19.043980 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:18:19.043987 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:18:19.043993 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:18:19.043999 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 12 19:18:19.044005 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 12 19:18:19.044011 kernel: Policy zone: Normal
Feb 12 19:18:19.044018 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:18:19.044025 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:18:19.044031 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:18:19.044037 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:18:19.044043 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:18:19.044050 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 12 19:18:19.044056 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved)
Feb 12 19:18:19.044063 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:18:19.044068 kernel: trace event string verifier disabled
Feb 12 19:18:19.044074 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:18:19.044081 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:18:19.044087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:18:19.044093 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:18:19.044099 kernel: Tracing variant of Tasks RCU enabled.
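The `Kernel command line:` record above is a flat string of space-separated bare flags and `key=value` tokens, and user space (dracut, Ignition) reads the same string back from `/proc/cmdline`. As a minimal sketch (not Flatcar's own tooling; the excerpt below is a shortened copy of the logged command line), splitting it makes entries such as `verity.usrhash` easy to pull out:

```python
# Minimal sketch: split a kernel command line (as logged above, or read
# from /proc/cmdline) into bare flags and key=value pairs.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 "
    "flatcar.oem.id=azure flatcar.autologin "
    "verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40"
)

params: dict[str, list[str]] = {}
for token in cmdline.split():
    key, sep, value = token.partition("=")
    # Bare flags (no '=', e.g. flatcar.autologin) get an empty-string value.
    params.setdefault(key, []).append(value if sep else "")

print(params["console"])         # ['tty1', 'ttyAMA0,115200n8'] - repeated keys accumulate
print(params["verity.usrhash"])  # the dm-verity root hash for the /usr partition
```

Note that real kernel command-line parsing also handles quoted values; this sketch ignores that case.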
Feb 12 19:18:19.044105 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:18:19.044111 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:18:19.044118 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:18:19.044124 kernel: GICv3: 960 SPIs implemented
Feb 12 19:18:19.044130 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:18:19.044136 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:18:19.044142 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:18:19.044148 kernel: GICv3: 16 PPIs implemented
Feb 12 19:18:19.044154 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 12 19:18:19.044160 kernel: ITS: No ITS available, not enabling LPIs
Feb 12 19:18:19.044166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:18:19.044172 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:18:19.044178 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:18:19.044184 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:18:19.044192 kernel: Console: colour dummy device 80x25
Feb 12 19:18:19.044198 kernel: printk: console [tty1] enabled
Feb 12 19:18:19.044204 kernel: ACPI: Core revision 20210730
Feb 12 19:18:19.044211 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:18:19.044217 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:18:19.044223 kernel: LSM: Security Framework initializing
Feb 12 19:18:19.044230 kernel: SELinux: Initializing.
Feb 12 19:18:19.044236 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:18:19.044243 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:18:19.044250 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 12 19:18:19.044256 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 12 19:18:19.044263 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:18:19.044269 kernel: Remapping and enabling EFI services.
Feb 12 19:18:19.044275 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:18:19.044281 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:18:19.044288 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 12 19:18:19.044294 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:18:19.044301 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:18:19.044308 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:18:19.044314 kernel: SMP: Total of 2 processors activated.
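The timer-calibrated BogoMIPS value above follows directly from the 25.00 MHz arch timer. A quick arithmetic check, as a sketch assuming the usual lpj = timer_freq / HZ relation with CONFIG_HZ=1000 (the HZ value is an assumption, not stated in the log), reproduces the figures printed:

```python
# Sketch: reproduce "50.00 BogoMIPS (lpj=25000)" and the 40ns sched_clock
# resolution from the 25.00 MHz arch timer. HZ=1000 is an assumed config.
timer_freq_hz = 25_000_000
HZ = 1000  # assumed CONFIG_HZ

lpj = timer_freq_hz // HZ            # loops per jiffy -> 25000
bogomips = lpj / (500_000 / HZ)      # kernel's reporting formula -> 50.0
resolution_ns = 1e9 / timer_freq_hz  # one timer tick -> 40.0 ns

print(lpj, bogomips, resolution_ns)  # 25000 50.0 40.0
```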
Feb 12 19:18:19.044320 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:18:19.044326 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 12 19:18:19.044332 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:18:19.044339 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:18:19.044345 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:18:19.044351 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:18:19.044357 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:18:19.044365 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:18:19.044371 kernel: alternatives: patching kernel code
Feb 12 19:18:19.044381 kernel: devtmpfs: initialized
Feb 12 19:18:19.044389 kernel: KASLR enabled
Feb 12 19:18:19.044395 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:18:19.044402 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:18:19.044408 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:18:19.044414 kernel: SMBIOS 3.1.0 present.
Feb 12 19:18:19.044421 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:18:19.044427 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:18:19.044435 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:18:19.044442 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:18:19.044449 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:18:19.044455 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:18:19.044462 kernel: audit: type=2000 audit(0.097:1): state=initialized audit_enabled=0 res=1
Feb 12 19:18:19.049496 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:18:19.049527 kernel: cpuidle: using governor menu
Feb 12 19:18:19.049541 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:18:19.049548 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:18:19.049555 kernel: ACPI: bus type PCI registered
Feb 12 19:18:19.049561 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:18:19.049568 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:18:19.049575 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:18:19.049582 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:18:19.049588 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:18:19.049595 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:18:19.049603 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:18:19.049610 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:18:19.049617 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:18:19.049623 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:18:19.049630 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:18:19.049636 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:18:19.049643 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:18:19.049649 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:18:19.049656 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:18:19.049664 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:18:19.049670 kernel: ACPI: Interpreter enabled
Feb 12 19:18:19.049677 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:18:19.049684 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:18:19.049691 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:18:19.049697 kernel: printk: bootconsole [pl11] disabled
Feb 12 19:18:19.049704 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 12 19:18:19.049710 kernel: iommu: Default domain type: Translated
Feb 12 19:18:19.049717 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:18:19.049725 kernel: vgaarb: loaded
Feb 12 19:18:19.049731 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:18:19.049738 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 19:18:19.049744 kernel: PTP clock support registered
Feb 12 19:18:19.049751 kernel: Registered efivars operations
Feb 12 19:18:19.049757 kernel: No ACPI PMU IRQ for CPU0
Feb 12 19:18:19.049764 kernel: No ACPI PMU IRQ for CPU1
Feb 12 19:18:19.049771 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:18:19.049778 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:18:19.049785 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:18:19.049792 kernel: pnp: PnP ACPI init
Feb 12 19:18:19.049798 kernel: pnp: PnP ACPI: found 0 devices
Feb 12 19:18:19.049805 kernel: NET: Registered PF_INET protocol family
Feb 12 19:18:19.049812 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:18:19.049819 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:18:19.049825 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:18:19.049832 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:18:19.049839 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:18:19.049847 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:18:19.049854 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:18:19.049861 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:18:19.049868 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:18:19.049875 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:18:19.049882 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 12 19:18:19.049888 kernel: kvm [1]: HYP mode not available
Feb 12 19:18:19.049895 kernel: Initialise system trusted keyrings
Feb 12 19:18:19.049901 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:18:19.049909 kernel: Key type asymmetric registered
Feb 12 19:18:19.049916 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:18:19.049922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:18:19.049929 kernel: io scheduler mq-deadline registered
Feb 12 19:18:19.049935 kernel: io scheduler kyber registered
Feb 12 19:18:19.049942 kernel: io scheduler bfq registered
Feb 12 19:18:19.049948 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:18:19.049955 kernel: thunder_xcv, ver 1.0
Feb 12 19:18:19.049961 kernel: thunder_bgx, ver 1.0
Feb 12 19:18:19.049969 kernel: nicpf, ver 1.0
Feb 12 19:18:19.049975 kernel: nicvf, ver 1.0
Feb 12 19:18:19.050113 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:18:19.050175 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:18:18 UTC (1707765498)
Feb 12 19:18:19.050184 kernel: efifb: probing for efifb
Feb 12 19:18:19.050191 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:18:19.050198 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:18:19.050204 kernel: efifb: scrolling: redraw
Feb 12 19:18:19.050213 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:18:19.050219 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:18:19.050226 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:18:19.050232 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
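The rtc-efi record above prints both the wall-clock time and the Unix epoch it corresponds to. A one-line sketch confirms the pair, and the same arithmetic decodes the `audit(<epoch>.<ms>:<serial>)` timestamps that appear later in this log:

```python
# Sketch: verify that 2024-02-12T19:18:18 UTC is epoch 1707765498,
# as printed by the rtc-efi record above.
from datetime import datetime, timezone

rtc = datetime(2024, 2, 12, 19, 18, 18, tzinfo=timezone.utc)
assert int(rtc.timestamp()) == 1707765498

# Decoding an audit timestamp, e.g. audit(1707765499.098:2):
print(datetime.fromtimestamp(1707765499.098, tz=timezone.utc).isoformat())
# -> 2024-02-12T19:18:19.098000+00:00
```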
Feb 12 19:18:19.050239 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:18:19.050246 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:18:19.050252 kernel: Segment Routing with IPv6
Feb 12 19:18:19.050258 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:18:19.050265 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:18:19.050273 kernel: Key type dns_resolver registered
Feb 12 19:18:19.050279 kernel: registered taskstats version 1
Feb 12 19:18:19.050286 kernel: Loading compiled-in X.509 certificates
Feb 12 19:18:19.050292 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:18:19.050299 kernel: Key type .fscrypt registered
Feb 12 19:18:19.050306 kernel: Key type fscrypt-provisioning registered
Feb 12 19:18:19.050313 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:18:19.050319 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:18:19.050326 kernel: ima: No architecture policies found
Feb 12 19:18:19.050334 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:18:19.050340 kernel: Run /init as init process
Feb 12 19:18:19.050362 kernel: with arguments:
Feb 12 19:18:19.050371 kernel: /init
Feb 12 19:18:19.050377 kernel: with environment:
Feb 12 19:18:19.050383 kernel: HOME=/
Feb 12 19:18:19.050390 kernel: TERM=linux
Feb 12 19:18:19.050396 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:18:19.050405 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:18:19.050417 systemd[1]: Detected virtualization microsoft.
Feb 12 19:18:19.050424 systemd[1]: Detected architecture arm64.
Feb 12 19:18:19.050431 systemd[1]: Running in initrd.
Feb 12 19:18:19.050438 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:18:19.050445 systemd[1]: Hostname set to <localhost>.
Feb 12 19:18:19.050453 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:18:19.050460 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:18:19.050468 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:18:19.050486 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:18:19.050493 systemd[1]: Reached target paths.target.
Feb 12 19:18:19.050500 systemd[1]: Reached target slices.target.
Feb 12 19:18:19.050506 systemd[1]: Reached target swap.target.
Feb 12 19:18:19.050513 systemd[1]: Reached target timers.target.
Feb 12 19:18:19.050521 systemd[1]: Listening on iscsid.socket.
Feb 12 19:18:19.050528 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:18:19.050537 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:18:19.050544 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:18:19.050551 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:18:19.050558 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:18:19.050565 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:18:19.050573 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:18:19.050579 systemd[1]: Reached target sockets.target.
Feb 12 19:18:19.050587 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:18:19.050593 systemd[1]: Finished network-cleanup.service.
Feb 12 19:18:19.050602 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:18:19.050609 systemd[1]: Starting systemd-journald.service...
Feb 12 19:18:19.050616 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:18:19.050623 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:18:19.050630 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:18:19.050641 systemd-journald[276]: Journal started
Feb 12 19:18:19.050685 systemd-journald[276]: Runtime Journal (/run/log/journal/b56a523b573b49cd9f19cb30c2f4fe20) is 8.0M, max 78.6M, 70.6M free.
Feb 12 19:18:19.030767 systemd-modules-load[277]: Inserted module 'overlay'
Feb 12 19:18:19.079971 systemd[1]: Started systemd-journald.service.
Feb 12 19:18:19.080028 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:18:19.084700 systemd-resolved[278]: Positive Trust Anchors:
Feb 12 19:18:19.084716 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:18:19.120224 kernel: audit: type=1130 audit(1707765499.098:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.084745 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:18:19.209532 kernel: Bridge firewalling registered
Feb 12 19:18:19.209554 kernel: audit: type=1130 audit(1707765499.167:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.209565 kernel: SCSI subsystem initialized
Feb 12 19:18:19.209573 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:18:19.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.086794 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 12 19:18:19.244728 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:18:19.244750 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:18:19.244758 kernel: audit: type=1130 audit(1707765499.214:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.099079 systemd[1]: Started systemd-resolved.service.
Feb 12 19:18:19.122459 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 12 19:18:19.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.168184 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:18:19.226395 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:18:19.247549 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 12 19:18:19.326118 kernel: audit: type=1130 audit(1707765499.254:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.326138 kernel: audit: type=1130 audit(1707765499.288:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.278439 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:18:19.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.289383 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:18:19.365534 kernel: audit: type=1130 audit(1707765499.320:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.320578 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:18:19.336375 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:18:19.361670 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:18:19.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.377552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:18:19.448614 kernel: audit: type=1130 audit(1707765499.400:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.448635 kernel: audit: type=1130 audit(1707765499.420:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.388206 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:18:19.481120 kernel: audit: type=1130 audit(1707765499.453:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.401170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:18:19.487259 dracut-cmdline[298]: dracut-dracut-053
Feb 12 19:18:19.431687 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:18:19.497262 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:18:19.455036 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:18:19.601494 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:18:19.610506 kernel: iscsi: registered transport (tcp)
Feb 12 19:18:19.631021 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:18:19.631060 kernel: QLogic iSCSI HBA Driver
Feb 12 19:18:19.665622 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:18:19.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:19.671014 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:18:19.726494 kernel: raid6: neonx8 gen() 13809 MB/s
Feb 12 19:18:19.747484 kernel: raid6: neonx8 xor() 10821 MB/s
Feb 12 19:18:19.767482 kernel: raid6: neonx4 gen() 13562 MB/s
Feb 12 19:18:19.790482 kernel: raid6: neonx4 xor() 11145 MB/s
Feb 12 19:18:19.811481 kernel: raid6: neonx2 gen() 13070 MB/s
Feb 12 19:18:19.833482 kernel: raid6: neonx2 xor() 10240 MB/s
Feb 12 19:18:19.854483 kernel: raid6: neonx1 gen() 10492 MB/s
Feb 12 19:18:19.875481 kernel: raid6: neonx1 xor() 8764 MB/s
Feb 12 19:18:19.897498 kernel: raid6: int64x8 gen() 6291 MB/s
Feb 12 19:18:19.918482 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 12 19:18:19.938481 kernel: raid6: int64x4 gen() 7246 MB/s
Feb 12 19:18:19.960481 kernel: raid6: int64x4 xor() 3856 MB/s
Feb 12 19:18:19.981481 kernel: raid6: int64x2 gen() 6155 MB/s
Feb 12 19:18:20.002482 kernel: raid6: int64x2 xor() 3315 MB/s
Feb 12 19:18:20.024483 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 12 19:18:20.050141 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 12 19:18:20.050151 kernel: raid6: using algorithm neonx8 gen() 13809 MB/s
Feb 12 19:18:20.050159 kernel: raid6: .... xor() 10821 MB/s, rmw enabled
Feb 12 19:18:20.055556 kernel: raid6: using neon recovery algorithm
Feb 12 19:18:20.075484 kernel: xor: measuring software checksum speed
Feb 12 19:18:20.084726 kernel: 8regs : 17289 MB/sec
Feb 12 19:18:20.084736 kernel: 32regs : 20755 MB/sec
Feb 12 19:18:20.089671 kernel: arm64_neon : 27968 MB/sec
Feb 12 19:18:20.089681 kernel: xor: using function: arm64_neon (27968 MB/sec)
Feb 12 19:18:20.152488 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:18:20.163080 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:18:20.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:20.172000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:18:20.172000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:18:20.173360 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:18:20.192515 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Feb 12 19:18:20.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:20.200263 systemd[1]: Started systemd-udevd.service.
Feb 12 19:18:20.212259 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:18:20.232194 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Feb 12 19:18:20.260133 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:18:20.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:20.266176 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:18:20.304733 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:18:20.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:20.369540 kernel: hv_vmbus: Vmbus version:5.3
Feb 12 19:18:20.391718 kernel: hv_vmbus: registering driver hid_hyperv
Feb 12 19:18:20.391774 kernel: hv_vmbus: registering driver hv_netvsc
Feb 12 19:18:20.391784 kernel: hv_vmbus: registering driver hv_storvsc
Feb 12 19:18:20.391793 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 12 19:18:20.409093 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 12 19:18:20.418164 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 12 19:18:20.432498 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 12 19:18:20.432668 kernel: scsi host0: storvsc_host_t
Feb 12 19:18:20.432756 kernel: scsi host1: storvsc_host_t
Feb 12 19:18:20.441403 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 12 19:18:20.448532 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 12 19:18:20.470382 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Feb 12 19:18:20.470664 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 19:18:20.471494 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 12 19:18:20.477487 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Feb 12 19:18:20.482261 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Feb 12 19:18:20.482418 kernel: sd 1:0:0:0: [sda] Write Protect is off
Feb 12 19:18:20.486514 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 12 19:18:20.486627 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 12 19:18:20.499488 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:18:20.508495 kernel: hv_netvsc 002248b7-a957-0022-48b7-a957002248b7 eth0: VF slot 1 added
Feb 12 19:18:20.508637 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Feb 12 19:18:20.525500 kernel: hv_vmbus: registering driver hv_pci
Feb 12 19:18:20.536144 kernel: hv_pci 3f2dae52-a468-46e8-afc1-207d0d28d612: PCI VMBus probing: Using version 0x10004
Feb 12 19:18:20.548929 kernel: hv_pci 3f2dae52-a468-46e8-afc1-207d0d28d612: PCI host bridge to bus a468:00
Feb 12 19:18:20.549094 kernel: pci_bus a468:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 12 19:18:20.555057 kernel: pci_bus a468:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 12 19:18:20.565878 kernel: pci a468:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 12 19:18:20.576659 kernel: pci a468:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 12 19:18:20.599612 kernel: pci a468:00:02.0: enabling Extended Tags
Feb 12 19:18:20.627161 kernel: pci a468:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a468:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 12 19:18:20.627345 kernel: pci_bus a468:00: busn_res: [bus 00-ff] end is updated to 00
Feb 12 19:18:20.633938 kernel: pci a468:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 12 19:18:20.674493 kernel: mlx5_core a468:00:02.0: firmware version: 16.30.1284
Feb 12 19:18:20.830659 kernel: mlx5_core a468:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 12 19:18:20.889508 kernel: hv_netvsc 002248b7-a957-0022-48b7-a957002248b7 eth0: VF registering: eth1
Feb 12 19:18:20.889683 kernel: mlx5_core a468:00:02.0 eth1: joined to eth0
Feb 12 19:18:20.900495 kernel: mlx5_core a468:00:02.0 enP42088s1: renamed from eth1
Feb 12 19:18:20.979403 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:18:21.073557 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (540)
Feb 12 19:18:21.087491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:18:21.185560 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:18:21.192905 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:18:21.210301 systemd[1]: Starting disk-uuid.service...
Feb 12 19:18:21.300623 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:18:22.236298 disk-uuid[597]: The operation has completed successfully.
Feb 12 19:18:22.241169 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:18:22.294152 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:18:22.294246 systemd[1]: Finished disk-uuid.service.
Feb 12 19:18:22.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.312751 systemd[1]: Starting verity-setup.service...
Feb 12 19:18:22.354716 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 19:18:22.561679 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:18:22.571729 systemd[1]: Finished verity-setup.service.
Feb 12 19:18:22.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.576807 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:18:22.635498 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:18:22.635989 systemd[1]: Mounted sysusr-usr.mount.
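The raid6 benchmark records above show the kernel timing each gen()/xor() candidate and then announcing the winner ("using algorithm neonx8 gen() 13809 MB/s"). A small sketch of that selection step, using the gen() figures measured in this very log (illustrative only; the kernel does this in C over live measurements):

```python
# Sketch: pick the raid6 gen() algorithm the way the log's summary line
# does, from the throughputs measured above (MB/s).
gen_results = {
    "neonx8": 13809, "neonx4": 13562, "neonx2": 13070, "neonx1": 10492,
    "int64x8": 6291, "int64x4": 7246, "int64x2": 6155, "int64x1": 5044,
}
best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
# -> raid6: using algorithm neonx8 gen() 13809 MB/s
```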
Feb 12 19:18:22.640750 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:18:22.641518 systemd[1]: Starting ignition-setup.service...
Feb 12 19:18:22.648767 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:18:22.688893 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:18:22.688948 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:18:22.693553 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:18:22.750727 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:18:22.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.760000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:18:22.761865 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:18:22.787850 systemd-networkd[842]: lo: Link UP
Feb 12 19:18:22.787861 systemd-networkd[842]: lo: Gained carrier
Feb 12 19:18:22.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.788233 systemd-networkd[842]: Enumeration completed
Feb 12 19:18:22.791383 systemd[1]: Started systemd-networkd.service.
Feb 12 19:18:22.795964 systemd[1]: Reached target network.target.
Feb 12 19:18:22.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.800731 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:18:22.834703 iscsid[854]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:18:22.834703 iscsid[854]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 12 19:18:22.834703 iscsid[854]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 19:18:22.834703 iscsid[854]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:18:22.834703 iscsid[854]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:18:22.834703 iscsid[854]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:18:22.834703 iscsid[854]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:18:22.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.810221 systemd[1]: Starting iscsiuio.service...
Feb 12 19:18:22.819181 systemd[1]: Started iscsiuio.service.
Feb 12 19:18:22.825434 systemd[1]: Starting iscsid.service...
Feb 12 19:18:22.838559 systemd[1]: mnt-oem.mount: Deactivated successfully.
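The iscsid warning above spells out the expected file format: `InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]`. A minimal sketch that writes a conforming file (the domain and identifier below are hypothetical placeholders chosen only for illustration; the real path is /etc/iscsi/initiatorname.iscsi):

```python
# Sketch: generate an InitiatorName line in the format iscsid asks for above.
# "example.com" and "node1" are hypothetical placeholders.
from datetime import date

domain = "example.com"
identifier = "node1"
reversed_domain = ".".join(reversed(domain.split(".")))
registered = date(2024, 2, 12)  # the yyyy-mm part is a registration date

iqn = f"iqn.{registered:%Y-%m}.{reversed_domain}:{identifier}"
with open("initiatorname.iscsi", "w") as f:
    f.write(f"InitiatorName={iqn}\n")

print(iqn)  # -> iqn.2024-02.com.example:node1
```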
Feb 12 19:18:22.838981 systemd[1]: Started iscsid.service.
Feb 12 19:18:22.850963 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:18:22.903642 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:18:22.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.916109 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:18:22.929338 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:18:22.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.938528 systemd[1]: Reached target remote-fs.target.
Feb 12 19:18:23.019976 kernel: kauditd_printk_skb: 17 callbacks suppressed
Feb 12 19:18:23.019998 kernel: audit: type=1130 audit(1707765502.989:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:22.947734 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:18:22.965491 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:18:22.984414 systemd[1]: Finished ignition-setup.service.
Feb 12 19:18:23.019669 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:18:23.051487 kernel: mlx5_core a468:00:02.0 enP42088s1: Link up
Feb 12 19:18:23.094495 kernel: hv_netvsc 002248b7-a957-0022-48b7-a957002248b7 eth0: Data path switched to VF: enP42088s1
Feb 12 19:18:23.094766 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:18:23.101984 systemd-networkd[842]: enP42088s1: Link UP
Feb 12 19:18:23.104580 systemd-networkd[842]: eth0: Link UP
Feb 12 19:18:23.104769 systemd-networkd[842]: eth0: Gained carrier
Feb 12 19:18:23.110663 systemd-networkd[842]: enP42088s1: Gained carrier
Feb 12 19:18:23.127548 systemd-networkd[842]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 12 19:18:25.145615 systemd-networkd[842]: eth0: Gained IPv6LL
Feb 12 19:18:26.551330 ignition[869]: Ignition 2.14.0
Feb 12 19:18:26.553524 ignition[869]: Stage: fetch-offline
Feb 12 19:18:26.553604 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:18:26.553631 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:18:26.626512 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:18:26.626665 ignition[869]: parsed url from cmdline: ""
Feb 12 19:18:26.626668 ignition[869]: no config URL provided
Feb 12 19:18:26.626673 ignition[869]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:18:26.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:26.637440 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:18:26.671554 kernel: audit: type=1130 audit(1707765506.642:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:26.626681 ignition[869]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:18:26.665573 systemd[1]: Starting ignition-fetch.service...
Feb 12 19:18:26.626687 ignition[869]: failed to fetch config: resource requires networking
Feb 12 19:18:26.626912 ignition[869]: Ignition finished successfully
Feb 12 19:18:26.681403 ignition[875]: Ignition 2.14.0
Feb 12 19:18:26.681410 ignition[875]: Stage: fetch
Feb 12 19:18:26.681543 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:18:26.681569 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:18:26.693142 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:18:26.693280 ignition[875]: parsed url from cmdline: ""
Feb 12 19:18:26.693285 ignition[875]: no config URL provided
Feb 12 19:18:26.693290 ignition[875]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:18:26.693298 ignition[875]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:18:26.693329 ignition[875]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 12 19:18:26.716962 ignition[875]: GET result: OK
Feb 12 19:18:26.717051 ignition[875]: config has been read from IMDS userdata
Feb 12 19:18:26.717121 ignition[875]: parsing config with SHA512: 5feb7dd0ba1e5a749290baead2a731e3c8a915abd6398a496a24cbfab1073e26f33f7ffb05dbcbc88e238bc02c2c9851dc2ca33572fea80a09ed81d251dcefdb
Feb 12 19:18:26.778647 unknown[875]: fetched base config from "system"
Feb 12 19:18:26.778666 unknown[875]: fetched base config from "system"
Feb 12 19:18:26.779436 ignition[875]: fetch: fetch complete
Feb 12 19:18:26.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:26.778673 unknown[875]: fetched user config from "azure"
Feb 12 19:18:26.821462 kernel: audit: type=1130 audit(1707765506.792:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:26.779441 ignition[875]: fetch: fetch passed
Feb 12 19:18:26.787920 systemd[1]: Finished ignition-fetch.service.
Feb 12 19:18:26.779507 ignition[875]: Ignition finished successfully
Feb 12 19:18:26.813202 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:18:26.858583 kernel: audit: type=1130 audit(1707765506.838:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:26.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:26.824683 ignition[881]: Ignition 2.14.0
Feb 12 19:18:26.834069 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:18:26.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:26.824689 ignition[881]: Stage: kargs
Feb 12 19:18:26.899146 kernel: audit: type=1130 audit(1707765506.870:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:26.839485 systemd[1]: Starting ignition-disks.service...
Feb 12 19:18:26.824825 ignition[881]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:18:26.866097 systemd[1]: Finished ignition-disks.service.
Feb 12 19:18:26.824844 ignition[881]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:18:26.870936 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:18:26.827645 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:18:26.896608 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:18:26.830729 ignition[881]: kargs: kargs passed
Feb 12 19:18:26.903641 systemd[1]: Reached target local-fs.target.
Feb 12 19:18:26.830779 ignition[881]: Ignition finished successfully
Feb 12 19:18:26.911705 systemd[1]: Reached target sysinit.target.
Feb 12 19:18:26.849283 ignition[887]: Ignition 2.14.0
Feb 12 19:18:26.921421 systemd[1]: Reached target basic.target.
Feb 12 19:18:26.849290 ignition[887]: Stage: disks
Feb 12 19:18:26.933746 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:18:26.849409 ignition[887]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:18:26.849428 ignition[887]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:18:26.852267 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:18:26.863915 ignition[887]: disks: disks passed
Feb 12 19:18:26.863979 ignition[887]: Ignition finished successfully
Feb 12 19:18:27.044284 systemd-fsck[895]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 12 19:18:27.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:27.050913 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:18:27.079912 kernel: audit: type=1130 audit(1707765507.055:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:27.061762 systemd[1]: Mounting sysroot.mount...
Feb 12 19:18:27.093491 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:18:27.093931 systemd[1]: Mounted sysroot.mount.
Feb 12 19:18:27.097484 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:18:27.143113 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:18:27.147760 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 12 19:18:27.159662 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:18:27.159707 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:18:27.174229 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:18:27.217857 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:18:27.222790 systemd[1]: Starting initrd-setup-root.service...
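Ignition's fetch stage above pulls the user config straight from the Azure Instance Metadata Service (IMDS). The same GET can be reproduced from inside the VM; a minimal standard-library sketch (IMDS requires the `Metadata: true` header, the endpoint is link-local so this only works on an Azure instance, and the returned userData is base64-encoded):

```python
# Sketch: fetch Azure IMDS userData the way Ignition's GET above does.
# Only works from inside an Azure VM (169.254.169.254 is link-local).
import base64
import urllib.request

url = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})  # required header
with urllib.request.urlopen(req, timeout=5) as resp:
    user_data = base64.b64decode(resp.read())  # IMDS returns it base64-encoded

print(user_data[:64])
```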
Feb 12 19:18:27.244513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (906)
Feb 12 19:18:27.244566 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:18:27.256854 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:18:27.264368 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:18:27.268578 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:18:27.278468 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:18:27.299355 initrd-setup-root[937]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:18:27.308851 initrd-setup-root[945]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:18:27.318273 initrd-setup-root[953]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:18:27.885245 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:18:27.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:27.912241 systemd[1]: Starting ignition-mount.service...
Feb 12 19:18:27.924888 kernel: audit: type=1130 audit(1707765507.890:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:27.922641 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:18:27.929989 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 19:18:27.930110 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 19:18:27.949227 ignition[971]: INFO : Ignition 2.14.0
Feb 12 19:18:27.949227 ignition[971]: INFO : Stage: mount
Feb 12 19:18:27.960029 ignition[971]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:18:27.960029 ignition[971]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:18:27.960029 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:18:27.960029 ignition[971]: INFO : mount: mount passed
Feb 12 19:18:27.960029 ignition[971]: INFO : Ignition finished successfully
Feb 12 19:18:28.046339 kernel: audit: type=1130 audit(1707765507.972:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:28.046364 kernel: audit: type=1130 audit(1707765508.018:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:27.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:28.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:27.966983 systemd[1]: Finished ignition-mount.service.
Feb 12 19:18:28.013593 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:18:28.665641 coreos-metadata[905]: Feb 12 19:18:28.665 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 12 19:18:28.676172 coreos-metadata[905]: Feb 12 19:18:28.676 INFO Fetch successful
Feb 12 19:18:28.712776 coreos-metadata[905]: Feb 12 19:18:28.712 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 12 19:18:28.729656 coreos-metadata[905]: Feb 12 19:18:28.729 INFO Fetch successful
Feb 12 19:18:28.736528 coreos-metadata[905]: Feb 12 19:18:28.736 INFO wrote hostname ci-3510.3.2-a-7e4be4023b to /sysroot/etc/hostname
Feb 12 19:18:28.746738 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 12 19:18:28.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:28.753137 systemd[1]: Starting ignition-files.service...
Feb 12 19:18:28.788224 kernel: audit: type=1130 audit(1707765508.752:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:28.787763 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:18:28.810493 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (984)
Feb 12 19:18:28.826289 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:18:28.826312 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:18:28.830927 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:18:28.835696 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:18:28.852764 ignition[1003]: INFO : Ignition 2.14.0
Feb 12 19:18:28.856835 ignition[1003]: INFO : Stage: files
Feb 12 19:18:28.860825 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:18:28.860825 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:18:28.883373 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:18:28.883373 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:18:28.883373 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:18:28.883373 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:18:28.972903 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:18:28.981694 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:18:28.996745 unknown[1003]: wrote ssh authorized keys file for user: core
Feb 12 19:18:29.002132 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:18:29.002132 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:18:29.002132 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 12 19:18:29.208419 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:18:29.425844 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:18:29.425844 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 19:18:29.450426 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 12 19:18:29.845945 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:18:29.988260 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 12 19:18:30.006312 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 19:18:30.006312 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:18:30.006312 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:18:30.006312 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 19:18:30.006312 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 12 19:18:30.366673 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 12 19:18:30.568937 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 12 19:18:30.586276 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 19:18:30.586276 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:18:30.586276 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 12 19:18:30.859216 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:18:30.940180 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:18:30.951157 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:18:30.951157 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 12 19:18:31.117244 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:18:31.395136 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 12 19:18:31.414648 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:18:31.414648 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:18:31.414648 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 12 19:18:31.475621 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:18:32.195569 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 12 19:18:32.213995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:18:32.213995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:18:32.213995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:18:32.213995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:18:32.213995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 12 19:18:32.283436 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 12 19:18:32.549981 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(b): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:18:32.568563 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:18:32.718243 kernel: BTRFS info: devid 1 device path
/dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1003) Feb 12 19:18:32.718266 kernel: audit: type=1130 audit(1707765512.684:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:32.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1799290320" Feb 12 19:18:32.718321 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1799290320": device or resource busy Feb 12 19:18:32.718321 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1799290320", trying btrfs: device or resource busy Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1799290320" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1799290320" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem1799290320" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem1799290320" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(15): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:18:32.718321 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem917293749" Feb 12 19:18:32.948331 kernel: audit: type=1130 audit(1707765512.761:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:32.948360 kernel: audit: type=1131 audit(1707765512.791:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:32.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:32.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:32.640710 systemd[1]: mnt-oem1799290320.mount: Deactivated successfully. Feb 12 19:18:32.954223 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(15): op(16): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem917293749": device or resource busy Feb 12 19:18:32.954223 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(15): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem917293749", trying btrfs: device or resource busy Feb 12 19:18:32.954223 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem917293749" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem917293749" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [started] unmounting "/mnt/oem917293749" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [finished] unmounting "/mnt/oem917293749" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(19): [started] processing unit "waagent.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(19): [finished] processing unit "waagent.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1a): [started] processing unit "nvidia.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1a): [finished] processing unit "nvidia.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1b): [started] processing unit "prepare-critools.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1d): [started] processing unit "prepare-helm.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:18:32.954223 ignition[1003]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:18:33.254197 kernel: audit: type=1130 audit(1707765513.044:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:33.254225 kernel: audit: type=1130 audit(1707765513.115:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.254235 kernel: audit: type=1131 audit(1707765513.141:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:32.669516 systemd[1]: Finished ignition-files.service. Feb 12 19:18:33.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(1f): [started] processing unit "containerd.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(1f): op(20): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(1f): op(20): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(1f): [finished] processing unit "containerd.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(21): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(21): op(22): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(21): op(22): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(21): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(25): [started] setting preset to enabled for "waagent.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(25): [finished] setting preset to enabled for "waagent.service" Feb 12 
19:18:33.286455 ignition[1003]: INFO : files: op(26): [started] setting preset to enabled for "nvidia.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(26): [finished] setting preset to enabled for "nvidia.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(27): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: op(27): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:18:33.286455 ignition[1003]: INFO : files: createResultFile: createFiles: op(28): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:18:33.618531 kernel: audit: type=1130 audit(1707765513.259:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.618558 kernel: audit: type=1131 audit(1707765513.376:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.618574 kernel: audit: type=1131 audit(1707765513.588:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.618652 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:18:32.687742 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:18:33.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.663033 ignition[1003]: INFO : files: createResultFile: createFiles: op(28): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:18:33.663033 ignition[1003]: INFO : files: files passed Feb 12 19:18:33.663033 ignition[1003]: INFO : Ignition finished successfully Feb 12 19:18:33.740905 kernel: audit: type=1131 audit(1707765513.637:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.740932 kernel: audit: type=1131 audit(1707765513.668:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.740942 kernel: audit: type=1131 audit(1707765513.704:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:33.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:32.717391 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:18:33.771021 kernel: audit: type=1131 audit(1707765513.715:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:32.722634 systemd[1]: Starting ignition-quench.service... Feb 12 19:18:32.744914 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:18:32.745041 systemd[1]: Finished ignition-quench.service. Feb 12 19:18:33.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.033844 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:18:33.044820 systemd[1]: Reached target ignition-complete.target. Feb 12 19:18:33.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.838950 ignition[1041]: INFO : Ignition 2.14.0 Feb 12 19:18:33.838950 ignition[1041]: INFO : Stage: umount Feb 12 19:18:33.838950 ignition[1041]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:18:33.838950 ignition[1041]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:18:33.838950 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:18:33.838950 ignition[1041]: INFO : umount: umount passed Feb 12 19:18:33.838950 ignition[1041]: INFO : Ignition finished successfully Feb 12 19:18:33.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:33.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.071684 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:18:33.097500 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:18:33.097737 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:18:33.142410 systemd[1]: Reached target initrd-fs.target. Feb 12 19:18:33.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.173290 systemd[1]: Reached target initrd.target. Feb 12 19:18:33.191405 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:18:33.203673 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:18:33.253929 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:18:33.297131 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:18:33.318614 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:18:33.330523 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:18:33.351314 systemd[1]: Stopped target timers.target. Feb 12 19:18:33.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.363454 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:18:33.363540 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:18:33.410768 systemd[1]: Stopped target initrd.target. Feb 12 19:18:34.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.429436 systemd[1]: Stopped target basic.target. Feb 12 19:18:34.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.443859 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:18:34.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:34.045000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:18:33.457957 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:18:33.471720 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:18:34.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.487588 systemd[1]: Stopped target remote-fs.target. 
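The files stage earlier in the log downloaded kubeadm, kubelet, kubectl, crictl, and the CNI plugins, and for each artifact logged "file matches expected sum of" before writing it to /sysroot. A sketch of that verify-before-install pattern, using the kubeadm URL and pinned SHA512 copied from the op(8) entries above (this mirrors the behaviour the log shows, not Ignition's actual implementation):

```python
import hashlib
import urllib.request

# URL and pinned digest copied from the op(8) kubeadm entries in the log.
URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm"
PINNED_SHA512 = (
    "46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38f"
    "abda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db"
)

h = hashlib.sha512()
with urllib.request.urlopen(URL) as resp:
    # Hash in 64 KiB chunks so the binary never has to fit in memory.
    for chunk in iter(lambda: resp.read(1 << 16), b""):
        h.update(chunk)

if h.hexdigest() != PINNED_SHA512:
    raise SystemExit("checksum mismatch: refusing to install")
print("kubeadm digest verified")
```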
Feb 12 19:18:34.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.502189 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:18:34.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.516037 systemd[1]: Stopped target sysinit.target. Feb 12 19:18:34.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.532272 systemd[1]: Stopped target local-fs.target. Feb 12 19:18:33.546396 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:18:34.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.559863 systemd[1]: Stopped target swap.target. Feb 12 19:18:33.573771 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:18:33.573842 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:18:34.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.588535 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:18:34.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.623518 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:18:34.163965 kernel: hv_netvsc 002248b7-a957-0022-48b7-a957002248b7 eth0: Data path switched from VF: enP42088s1 Feb 12 19:18:34.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.623573 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:18:33.637720 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:18:33.637759 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:18:33.669160 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:18:33.669212 systemd[1]: Stopped ignition-files.service. Feb 12 19:18:34.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.705139 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:18:34.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:34.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.705184 systemd[1]: Stopped flatcar-metadata-hostname.service. 
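flatcar-metadata-hostname.service, stopped above, earlier fetched the instance name from the Azure IMDS endpoint shown in the log and wrote it to /sysroot/etc/hostname. A sketch of the same query; the URL is copied from the log, while the Metadata: true header is the standard IMDS requirement (the log does not show coreos-metadata's actual request headers):

```python
import urllib.request

# Link-local IMDS endpoint copied from the coreos-metadata entries above;
# it only answers from inside an Azure VM.
URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    name = resp.read().decode()

print("instance name:", name)  # the initrd wrote this value to /etc/hostname
```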
Feb 12 19:18:33.733253 systemd[1]: Stopping ignition-mount.service... Feb 12 19:18:33.777621 systemd[1]: Stopping iscsiuio.service... Feb 12 19:18:33.788184 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:18:33.788279 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:18:33.803683 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:18:33.816747 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:18:33.816825 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:18:33.822551 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:18:33.822603 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:18:34.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:33.834375 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:18:33.834515 systemd[1]: Stopped iscsiuio.service. Feb 12 19:18:33.844738 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:18:33.845105 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:18:33.850245 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:18:33.850399 systemd[1]: Stopped ignition-mount.service. Feb 12 19:18:33.858961 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:18:33.859022 systemd[1]: Stopped ignition-disks.service. Feb 12 19:18:34.306000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:18:34.306000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:18:34.308000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:18:34.308000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:18:34.308000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:18:33.883770 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:18:33.883824 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:18:33.894358 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:18:33.894401 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:18:33.902989 systemd[1]: Stopped target network.target. Feb 12 19:18:34.337516 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Feb 12 19:18:34.337548 iscsid[854]: iscsid shutting down. Feb 12 19:18:33.921551 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:18:33.921617 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:18:33.930006 systemd[1]: Stopped target paths.target. Feb 12 19:18:33.937551 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:18:33.941505 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:18:33.953315 systemd[1]: Stopped target slices.target. Feb 12 19:18:33.961102 systemd[1]: Stopped target sockets.target. Feb 12 19:18:33.969008 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:18:33.969045 systemd[1]: Closed iscsid.socket. Feb 12 19:18:33.976297 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:18:33.976333 systemd[1]: Closed iscsiuio.socket. Feb 12 19:18:33.983764 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:18:33.983809 systemd[1]: Stopped ignition-setup.service. Feb 12 19:18:33.992677 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:18:34.001163 systemd[1]: Stopping systemd-resolved.service... 
Feb 12 19:18:34.010668 systemd-networkd[842]: eth0: DHCPv6 lease lost Feb 12 19:18:34.341000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:18:34.012619 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:18:34.013168 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:18:34.013268 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:18:34.020698 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:18:34.020815 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:18:34.031799 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:18:34.031894 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:18:34.041056 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:18:34.041094 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:18:34.050091 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:18:34.050140 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:18:34.060140 systemd[1]: Stopping network-cleanup.service... Feb 12 19:18:34.068734 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:18:34.068801 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:18:34.073840 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:18:34.073894 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:18:34.085617 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:18:34.085664 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:18:34.091034 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:18:34.099331 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:18:34.102871 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:18:34.103009 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:18:34.107994 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:18:34.108036 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:18:34.116600 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:18:34.116639 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:18:34.124305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:18:34.124352 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:18:34.133560 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:18:34.133604 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:18:34.141747 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:18:34.141786 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:18:34.163432 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:18:34.178789 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:18:34.178876 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:18:34.194090 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:18:34.194188 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:18:34.243920 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:18:34.244020 systemd[1]: Stopped network-cleanup.service. Feb 12 19:18:34.253513 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:18:34.264545 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:18:34.304686 systemd[1]: Switching root. 
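The teardown sequence above is strictly ordered: services stop, sockets close, udev and networking are cleaned up, and only then does systemd switch root. A small sketch for recovering that stop order from journalctl output (one record per line, unlike the wrapped rendering in this capture):

```python
import re

# Matches systemd's "Stopped <unit>." and "Stopped target <target>." records.
STOPPED = re.compile(r"systemd\[1\]: Stopped (?:target )?(.+?)\.$")

def stopped_units(lines):
    """Yield unit/target names in the order systemd stopped them."""
    for line in lines:
        m = STOPPED.search(line)
        if m:
            yield m.group(1)

# Two records shaped like the ones in the teardown above:
sample = [
    "Feb 12 19:18:33.363540 systemd[1]: Stopped dracut-pre-pivot.service.",
    "Feb 12 19:18:33.487588 systemd[1]: Stopped target remote-fs.target.",
]
print(list(stopped_units(sample)))  # ['dracut-pre-pivot.service', 'remote-fs.target']
```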
Feb 12 19:18:34.342623 systemd-journald[276]: Journal stopped Feb 12 19:18:48.372528 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:18:48.372550 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:18:48.372560 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:18:48.372571 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:18:48.372579 kernel: SELinux: policy capability open_perms=1 Feb 12 19:18:48.372586 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:18:48.372595 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:18:48.372603 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:18:48.372611 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:18:48.372619 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:18:48.372628 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:18:48.372637 systemd[1]: Successfully loaded SELinux policy in 289.079ms. Feb 12 19:18:48.372647 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.051ms. Feb 12 19:18:48.372657 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:18:48.372668 systemd[1]: Detected virtualization microsoft. Feb 12 19:18:48.372677 systemd[1]: Detected architecture arm64. Feb 12 19:18:48.372685 systemd[1]: Detected first boot. Feb 12 19:18:48.372695 systemd[1]: Hostname set to <ci-3510.3.2-a-7e4be4023b>. Feb 12 19:18:48.372703 systemd[1]: Initializing machine ID from random generator. Feb 12 19:18:48.372712 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
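The kernel lines above enumerate the policy capabilities compiled into the loaded SELinux policy (network_peer_controls=1, open_perms=1, and so on). On a running SELinux system the same flags can be read back through selinuxfs; a sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

```python
from pathlib import Path

# Each file under policy_capabilities holds "0" or "1", mirroring the
# "SELinux: policy capability ...=N" lines printed at policy load.
caps_dir = Path("/sys/fs/selinux/policy_capabilities")
for cap in sorted(caps_dir.iterdir()):
    print(cap.name, "=", cap.read_text().strip())
```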
Feb 12 19:18:48.372722 kernel: kauditd_printk_skb: 36 callbacks suppressed Feb 12 19:18:48.372731 kernel: audit: type=1400 audit(1707765520.033:87): avc: denied { associate } for pid=1092 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:18:48.372743 kernel: audit: type=1300 audit(1707765520.033:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014766c a1=40000c8af8 a2=40000cea00 a3=32 items=0 ppid=1075 pid=1092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:48.372753 kernel: audit: type=1327 audit(1707765520.033:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:18:48.372763 kernel: audit: type=1400 audit(1707765520.047:88): avc: denied { associate } for pid=1092 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:18:48.372772 kernel: audit: type=1300 audit(1707765520.047:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147745 a2=1ed a3=0 items=2 ppid=1075 pid=1092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:48.372780 kernel: audit: type=1307 audit(1707765520.047:88): cwd="/" Feb 12 19:18:48.372791 kernel: audit: type=1302 audit(1707765520.047:88): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:48.372800 kernel: audit: type=1302 audit(1707765520.047:88): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:48.372809 kernel: audit: type=1327 audit(1707765520.047:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:18:48.372818 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:18:48.372827 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:18:48.372836 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:18:48.372846 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:18:48.372857 systemd[1]: Queued start job for default target multi-user.target. 
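The audit type=1327 (PROCTITLE) records above carry the process command line as hex-encoded bytes, with NUL separators between argv elements. Decoding the string from the torcx-generator record (the kernel truncates the proctitle field, which is why the last path is cut short):

```python
# Hex string copied verbatim from the PROCTITLE records above.
HEX = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F"
    "72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F"
    "67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
    "2E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"
)

argv = [a.decode() for a in bytes.fromhex(HEX).split(b"\x00")]
print(argv)
# ['/usr/lib/systemd/system-generators/torcx-generator',
#  '/run/systemd/generator', '/run/systemd/generator.early',
#  '/run/systemd/generator.la']   # last element truncated by the kernel
```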
Feb 12 19:18:48.372866 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:18:48.372875 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:18:48.372884 systemd[1]: Created slice system-getty.slice. Feb 12 19:18:48.372893 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:18:48.372903 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:18:48.372914 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:18:48.372925 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:18:48.372935 systemd[1]: Created slice user.slice. Feb 12 19:18:48.372944 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:18:48.372953 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:18:48.372962 systemd[1]: Set up automount boot.automount. Feb 12 19:18:48.372971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:18:48.372980 systemd[1]: Reached target integritysetup.target. Feb 12 19:18:48.372989 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:18:48.372998 systemd[1]: Reached target remote-fs.target. Feb 12 19:18:48.373009 systemd[1]: Reached target slices.target. Feb 12 19:18:48.373018 systemd[1]: Reached target swap.target. Feb 12 19:18:48.373027 systemd[1]: Reached target torcx.target. Feb 12 19:18:48.373036 systemd[1]: Reached target veritysetup.target. Feb 12 19:18:48.373045 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:18:48.373055 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:18:48.373064 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:18:48.373073 kernel: audit: type=1400 audit(1707765527.937:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:18:48.373083 kernel: audit: type=1335 audit(1707765527.937:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:18:48.373092 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:18:48.373102 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:18:48.373111 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:18:48.373121 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:18:48.373130 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:18:48.373141 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:18:48.373151 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:18:48.373160 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:18:48.373170 systemd[1]: Mounting media.mount... Feb 12 19:18:48.373179 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:18:48.373188 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:18:48.373197 systemd[1]: Mounting tmp.mount... Feb 12 19:18:48.373208 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:18:48.373217 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:18:48.373226 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:18:48.373236 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:18:48.373245 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:18:48.373254 systemd[1]: Starting modprobe@drm.service... Feb 12 19:18:48.373263 systemd[1]: Starting modprobe@efi_pstore.service... 
Feb 12 19:18:48.373272 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:18:48.373281 systemd[1]: Starting modprobe@loop.service... Feb 12 19:18:48.373292 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:18:48.373302 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:18:48.373311 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:18:48.373322 systemd[1]: Starting systemd-journald.service... Feb 12 19:18:48.373332 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:18:48.373341 kernel: loop: module loaded Feb 12 19:18:48.373350 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:18:48.373359 kernel: fuse: init (API version 7.34) Feb 12 19:18:48.373369 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:18:48.373379 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:18:48.373388 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:18:48.373397 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:18:48.373410 systemd-journald[1207]: Journal started Feb 12 19:18:48.373449 systemd-journald[1207]: Runtime Journal (/run/log/journal/3e9c73f4c0334506a52ea984c82f3594) is 8.0M, max 78.6M, 70.6M free. Feb 12 19:18:47.937000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:18:47.937000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:18:48.369000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:18:48.396995 kernel: audit: type=1305 audit(1707765528.369:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:18:48.397056 systemd[1]: Started systemd-journald.service. Feb 12 19:18:48.397075 kernel: audit: type=1300 audit(1707765528.369:91): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe9702970 a2=4000 a3=1 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:48.369000 audit[1207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe9702970 a2=4000 a3=1 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:48.427933 systemd[1]: Mounted media.mount. Feb 12 19:18:48.430014 kernel: audit: type=1327 audit(1707765528.369:91): proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:18:48.430065 kernel: audit: type=1130 audit(1707765528.412:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:48.369000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:18:48.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.464676 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:18:48.469340 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:18:48.474545 systemd[1]: Mounted tmp.mount. Feb 12 19:18:48.478972 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:18:48.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.489022 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:18:48.508295 kernel: audit: type=1130 audit(1707765528.483:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.509856 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:18:48.510172 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:18:48.515658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:18:48.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.522671 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:18:48.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.552292 kernel: audit: type=1130 audit(1707765528.509:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.552334 kernel: audit: type=1130 audit(1707765528.514:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.574683 kernel: audit: type=1131 audit(1707765528.514:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.575371 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:18:48.575693 systemd[1]: Finished modprobe@drm.service. Feb 12 19:18:48.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:48.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.581239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:18:48.581463 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:18:48.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.587381 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:18:48.587626 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:18:48.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.592786 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:18:48.593015 systemd[1]: Finished modprobe@loop.service. Feb 12 19:18:48.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.598115 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:18:48.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.603303 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:18:48.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.609131 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:18:48.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:18:48.614202 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:18:48.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.619665 systemd[1]: Reached target network-pre.target. Feb 12 19:18:48.625389 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:18:48.632192 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:18:48.636969 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:18:48.657765 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:18:48.663887 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:18:48.668411 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:18:48.669735 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:18:48.677004 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:18:48.678315 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:18:48.683684 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:18:48.690233 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:18:48.696579 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:18:48.702566 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:18:48.710128 udevadm[1243]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 19:18:48.719551 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:18:48.724922 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:18:48.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.743991 systemd-journald[1207]: Time spent on flushing to /var/log/journal/3e9c73f4c0334506a52ea984c82f3594 is 13.292ms for 1072 entries. Feb 12 19:18:48.743991 systemd-journald[1207]: System Journal (/var/log/journal/3e9c73f4c0334506a52ea984c82f3594) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:18:48.851787 systemd-journald[1207]: Received client request to flush runtime journal. Feb 12 19:18:48.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.776340 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:18:48.852934 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:18:48.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:49.402677 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:18:49.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:49.409111 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
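
The flush above is systemd-journal-flush.service handing the volatile runtime journal in /run/log/journal over to the persistent store under /var/log/journal (13.292ms for 1072 entries here). A minimal sketch of inspecting and re-triggering the same handover by hand, assuming an ordinary systemd host with persistent journalling enabled:

    # Ask journald to flush /run/log/journal into /var/log/journal
    journalctl --flush
    # Report how much disk the persistent journal now occupies
    journalctl --disk-usage
    # The machine-id-named directory referenced in the log above
    ls /var/log/journal/
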
Feb 12 19:18:49.764612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:18:49.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:50.266336 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:18:50.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:50.272565 systemd[1]: Starting systemd-udevd.service... Feb 12 19:18:50.290927 systemd-udevd[1254]: Using default interface naming scheme 'v252'. Feb 12 19:18:50.522934 systemd[1]: Started systemd-udevd.service. Feb 12 19:18:50.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:50.548780 systemd[1]: Starting systemd-networkd.service... Feb 12 19:18:50.568209 systemd[1]: Found device dev-ttyAMA0.device. Feb 12 19:18:50.631508 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:18:50.630000 audit[1271]: AVC avc: denied { confidentiality } for pid=1271 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:18:50.665045 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:18:50.665163 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:18:50.665193 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:18:50.665219 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:18:50.665235 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 12 19:18:50.670524 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:18:50.670628 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:18:50.691918 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:18:50.696571 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:18:50.630000 audit[1271]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae2623870 a1=aa2c a2=ffff949624b0 a3=aaaae257d010 items=12 ppid=1254 pid=1271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:50.630000 audit: CWD cwd="/" Feb 12 19:18:50.630000 audit: PATH item=0 name=(null) inode=6457 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=1 name=(null) inode=10957 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=2 name=(null) inode=10957 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=3 name=(null) inode=10958 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=4 name=(null) inode=10957 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=5 name=(null) inode=10959 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=6 name=(null) inode=10957 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=7 name=(null) inode=10960 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=8 name=(null) inode=10957 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=9 name=(null) inode=10961 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=10 name=(null) inode=10957 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PATH item=11 name=(null) inode=10962 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:50.630000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:18:50.714640 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:18:50.714755 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:18:50.714776 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:18:50.714795 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:18:50.714811 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:18:50.567326 systemd[1]: Started systemd-userdbd.service. Feb 12 19:18:50.650268 systemd-journald[1207]: Time jumped backwards, rotating. Feb 12 19:18:50.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:50.815224 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1261) Feb 12 19:18:50.834042 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 12 19:18:50.834413 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:18:50.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:50.845885 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:18:50.873618 systemd-networkd[1275]: lo: Link UP Feb 12 19:18:50.873888 systemd-networkd[1275]: lo: Gained carrier Feb 12 19:18:50.874421 systemd-networkd[1275]: Enumeration completed Feb 12 19:18:50.874608 systemd[1]: Started systemd-networkd.service. 
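
systemd-networkd has just started; as the entries that follow show, it enumerates the links and configures eth0 from /usr/lib/systemd/network/zz-default.network, acquiring 10.200.20.34/24 by DHCP. A catch-all DHCP .network unit of the kind implied there typically looks like the sketch below; this is an assumption about its shape, not the file read from this host:

    # Illustrative catch-all .network unit; the zz-default.network actually
    # shipped by Flatcar may differ in detail (this is an assumption)
    cat <<'EOF' >/etc/systemd/network/zz-default.network
    [Match]
    Name=*

    [Network]
    DHCP=yes
    EOF
    # Confirm the resulting lease (the log shows 10.200.20.34/24 from 168.63.129.16)
    networkctl status eth0
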
Feb 12 19:18:50.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:50.880592 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:18:50.908406 systemd-networkd[1275]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:18:50.957211 kernel: mlx5_core a468:00:02.0 enP42088s1: Link up Feb 12 19:18:50.983217 kernel: hv_netvsc 002248b7-a957-0022-48b7-a957002248b7 eth0: Data path switched to VF: enP42088s1 Feb 12 19:18:50.984374 systemd-networkd[1275]: enP42088s1: Link UP Feb 12 19:18:50.984682 systemd-networkd[1275]: eth0: Link UP Feb 12 19:18:50.984693 systemd-networkd[1275]: eth0: Gained carrier Feb 12 19:18:50.992718 systemd-networkd[1275]: enP42088s1: Gained carrier Feb 12 19:18:51.005303 systemd-networkd[1275]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:18:51.287923 lvm[1333]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:18:51.328180 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:18:51.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:51.333164 systemd[1]: Reached target cryptsetup.target. Feb 12 19:18:51.338963 systemd[1]: Starting lvm2-activation.service... Feb 12 19:18:51.342942 lvm[1336]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:18:51.368197 systemd[1]: Finished lvm2-activation.service. Feb 12 19:18:51.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:51.373213 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:18:51.378549 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:18:51.378580 systemd[1]: Reached target local-fs.target. Feb 12 19:18:51.383103 systemd[1]: Reached target machines.target. Feb 12 19:18:51.389016 systemd[1]: Starting ldconfig.service... Feb 12 19:18:51.393072 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:18:51.393150 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:18:51.394453 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:18:51.399915 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:18:51.407050 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:18:51.411844 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:18:51.411905 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:18:51.413159 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:18:51.426467 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Feb 12 19:18:51.443487 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:18:51.444908 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:18:51.516300 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1339 (bootctl) Feb 12 19:18:51.517625 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:18:52.146222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:18:52.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:52.417462 systemd-fsck[1348]: fsck.fat 4.2 (2021-01-31) Feb 12 19:18:52.417462 systemd-fsck[1348]: /dev/sda1: 236 files, 113719/258078 clusters Feb 12 19:18:52.419503 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:18:52.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:52.428037 systemd[1]: Mounting boot.mount... Feb 12 19:18:52.480168 systemd[1]: Mounted boot.mount. Feb 12 19:18:52.491303 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:18:52.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:52.649179 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:18:52.649865 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:18:52.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:52.845371 systemd-networkd[1275]: eth0: Gained IPv6LL Feb 12 19:18:52.850222 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:18:52.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:52.861208 kernel: kauditd_printk_skb: 46 callbacks suppressed Feb 12 19:18:52.861286 kernel: audit: type=1130 audit(1707765532.855:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.301861 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:18:54.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.312692 systemd[1]: Starting audit-rules.service... 
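
The three "Duplicate line" warnings above mean that two tmpfiles.d fragments declare the same path ("/run/lock", "/root", "/var/lib/systemd"); systemd-tmpfiles keeps the first definition it parses and ignores the rest. A quick sketch for locating the clashing fragments on a host like this:

    # --cat-config prints the merged tmpfiles configuration, with each
    # fragment preceded by a comment naming its source file
    systemd-tmpfiles --cat-config | grep -n '/run/lock'
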
Feb 12 19:18:54.327094 kernel: audit: type=1130 audit(1707765534.305:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.328938 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:18:54.337555 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:18:54.344532 systemd[1]: Starting systemd-resolved.service... Feb 12 19:18:54.350491 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:18:54.356239 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:18:54.361040 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:18:54.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.372722 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:18:54.383742 kernel: audit: type=1130 audit(1707765534.364:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.405000 audit[1367]: SYSTEM_BOOT pid=1367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.408955 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:18:54.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.451661 kernel: audit: type=1127 audit(1707765534.405:131): pid=1367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.452342 kernel: audit: type=1130 audit(1707765534.427:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.499246 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:18:54.504006 systemd[1]: Reached target time-set.target. Feb 12 19:18:54.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.528215 kernel: audit: type=1130 audit(1707765534.502:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.686315 systemd-resolved[1365]: Positive Trust Anchors: Feb 12 19:18:54.686329 systemd-resolved[1365]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:18:54.686357 systemd-resolved[1365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:18:54.710771 systemd-resolved[1365]: Using system hostname 'ci-3510.3.2-a-7e4be4023b'. Feb 12 19:18:54.712155 systemd[1]: Started systemd-resolved.service. Feb 12 19:18:54.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.717690 systemd[1]: Reached target network.target. Feb 12 19:18:54.743976 kernel: audit: type=1130 audit(1707765534.716:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.744407 systemd[1]: Reached target network-online.target. Feb 12 19:18:54.749400 systemd[1]: Reached target nss-lookup.target. Feb 12 19:18:54.757252 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:18:54.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.785172 augenrules[1383]: No rules Feb 12 19:18:54.783000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:18:54.786580 systemd[1]: Finished audit-rules.service. Feb 12 19:18:54.797876 kernel: audit: type=1130 audit(1707765534.762:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:54.797995 kernel: audit: type=1305 audit(1707765534.783:136): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:18:54.783000 audit[1383]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe8450d00 a2=420 a3=0 items=0 ppid=1360 pid=1383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:54.827372 kernel: audit: type=1300 audit(1707765534.783:136): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe8450d00 a2=420 a3=0 items=0 ppid=1360 pid=1383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:54.827533 systemd-timesyncd[1366]: Contacted time server 173.255.255.133:123 (0.flatcar.pool.ntp.org). 
Feb 12 19:18:54.783000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:18:54.827832 systemd-timesyncd[1366]: Initial clock synchronization to Mon 2024-02-12 19:18:54.817500 UTC. Feb 12 19:19:00.521118 ldconfig[1338]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:19:00.537794 systemd[1]: Finished ldconfig.service. Feb 12 19:19:00.544487 systemd[1]: Starting systemd-update-done.service... Feb 12 19:19:00.585518 systemd[1]: Finished systemd-update-done.service. Feb 12 19:19:00.592820 systemd[1]: Reached target sysinit.target. Feb 12 19:19:00.597632 systemd[1]: Started motdgen.path. Feb 12 19:19:00.601855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:19:00.608979 systemd[1]: Started logrotate.timer. Feb 12 19:19:00.612924 systemd[1]: Started mdadm.timer. Feb 12 19:19:00.617132 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:19:00.622273 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:19:00.622385 systemd[1]: Reached target paths.target. Feb 12 19:19:00.626580 systemd[1]: Reached target timers.target. Feb 12 19:19:00.632114 systemd[1]: Listening on dbus.socket. Feb 12 19:19:00.637284 systemd[1]: Starting docker.socket... Feb 12 19:19:00.657741 systemd[1]: Listening on sshd.socket. Feb 12 19:19:00.661894 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:19:00.662426 systemd[1]: Listening on docker.socket. Feb 12 19:19:00.666543 systemd[1]: Reached target sockets.target. Feb 12 19:19:00.670617 systemd[1]: Reached target basic.target. Feb 12 19:19:00.674808 systemd[1]: System is tainted: cgroupsv1 Feb 12 19:19:00.674934 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:19:00.675073 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:19:00.676256 systemd[1]: Starting containerd.service... Feb 12 19:19:00.681934 systemd[1]: Starting dbus.service... Feb 12 19:19:00.686307 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:19:00.691912 systemd[1]: Starting extend-filesystems.service... Feb 12 19:19:00.696374 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:19:00.697456 systemd[1]: Starting motdgen.service... Feb 12 19:19:00.701744 systemd[1]: Started nvidia.service. Feb 12 19:19:00.706631 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:19:00.711797 systemd[1]: Starting prepare-critools.service... Feb 12 19:19:00.717459 systemd[1]: Starting prepare-helm.service... Feb 12 19:19:00.722175 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:19:00.728119 systemd[1]: Starting sshd-keygen.service... Feb 12 19:19:00.733845 systemd[1]: Starting systemd-logind.service... Feb 12 19:19:00.739900 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:19:00.739972 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
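
The PROCTITLE value at the start of this stretch is the process's argv, hex-encoded with NUL separators. Decoding it recovers the auditctl invocation behind the augenrules "No rules" result:

    # Decode the hex-encoded, NUL-separated proctitle from the audit record
    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules
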
Feb 12 19:19:00.741321 systemd[1]: Starting update-engine.service... Feb 12 19:19:00.749211 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:19:00.759094 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:19:00.759420 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:19:00.765430 jq[1421]: true Feb 12 19:19:00.765685 jq[1398]: false Feb 12 19:19:00.777983 extend-filesystems[1399]: Found sda Feb 12 19:19:00.777983 extend-filesystems[1399]: Found sda1 Feb 12 19:19:00.777983 extend-filesystems[1399]: Found sda2 Feb 12 19:19:00.777983 extend-filesystems[1399]: Found sda3 Feb 12 19:19:00.777983 extend-filesystems[1399]: Found usr Feb 12 19:19:00.815683 extend-filesystems[1399]: Found sda4 Feb 12 19:19:00.815683 extend-filesystems[1399]: Found sda6 Feb 12 19:19:00.815683 extend-filesystems[1399]: Found sda7 Feb 12 19:19:00.815683 extend-filesystems[1399]: Found sda9 Feb 12 19:19:00.815683 extend-filesystems[1399]: Checking size of /dev/sda9 Feb 12 19:19:00.789386 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:19:00.789649 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:19:00.791819 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:19:00.862451 jq[1434]: true Feb 12 19:19:00.792096 systemd[1]: Finished motdgen.service. Feb 12 19:19:00.825010 systemd-logind[1416]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:19:00.825316 systemd-logind[1416]: New seat seat0. Feb 12 19:19:00.869564 env[1429]: time="2024-02-12T19:19:00.869520691Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:19:00.891635 extend-filesystems[1399]: Old size kept for /dev/sda9 Feb 12 19:19:00.897491 extend-filesystems[1399]: Found sr0 Feb 12 19:19:00.892152 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.910315880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.910918090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.913926538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.913957644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.914323480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.914343511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.914358104Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.914367900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.914448583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:19:00.919956 env[1429]: time="2024-02-12T19:19:00.914667605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:19:00.892437 systemd[1]: Finished extend-filesystems.service. Feb 12 19:19:00.920282 env[1429]: time="2024-02-12T19:19:00.914813739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:19:00.920282 env[1429]: time="2024-02-12T19:19:00.914829852Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:19:00.920282 env[1429]: time="2024-02-12T19:19:00.914887226Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:19:00.920282 env[1429]: time="2024-02-12T19:19:00.914900620Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:19:00.930198 tar[1423]: ./ Feb 12 19:19:00.930198 tar[1423]: ./macvlan Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.931901741Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.931968631Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.931982305Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932016689Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932121282Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932139834Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932152228Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932526460Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932547811Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932563604Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932578317Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932590831Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932725291Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:19:00.935332 env[1429]: time="2024-02-12T19:19:00.932800377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:19:00.935990 tar[1425]: linux-arm64/helm Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933096004Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933119954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933134707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933176488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933226386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933240659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933252774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933265408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933278922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933289877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933301152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933315106Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933438291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933455083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936323 env[1429]: time="2024-02-12T19:19:00.933470156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936626 tar[1424]: crictl Feb 12 19:19:00.936793 env[1429]: time="2024-02-12T19:19:00.933482271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 12 19:19:00.936793 env[1429]: time="2024-02-12T19:19:00.933497584Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:19:00.936793 env[1429]: time="2024-02-12T19:19:00.933509019Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:19:00.936793 env[1429]: time="2024-02-12T19:19:00.933528250Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:19:00.936793 env[1429]: time="2024-02-12T19:19:00.933563834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 19:19:00.936895 env[1429]: time="2024-02-12T19:19:00.933758067Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:19:00.936895 env[1429]: time="2024-02-12T19:19:00.933810363Z" level=info msg="Connect containerd service" Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.933887729Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.938457955Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:19:00.955687 
env[1429]: time="2024-02-12T19:19:00.938746985Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.938784488Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.939329403Z" level=info msg="Start subscribing containerd event" Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.939397173Z" level=info msg="Start recovering state" Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.939466542Z" level=info msg="Start event monitor" Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.939482615Z" level=info msg="Start snapshots syncer" Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.939492410Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:19:00.955687 env[1429]: time="2024-02-12T19:19:00.939501166Z" level=info msg="Start streaming server" Feb 12 19:19:00.938923 systemd[1]: Started containerd.service. Feb 12 19:19:00.971123 env[1429]: time="2024-02-12T19:19:00.970869711Z" level=info msg="containerd successfully booted in 0.102141s" Feb 12 19:19:00.993162 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:19:00.994119 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:19:01.059412 tar[1423]: ./static Feb 12 19:19:01.119178 dbus-daemon[1397]: [system] SELinux support is enabled Feb 12 19:19:01.125242 dbus-daemon[1397]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 19:19:01.119386 systemd[1]: Started dbus.service. Feb 12 19:19:01.124692 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:19:01.124713 systemd[1]: Reached target system-config.target. Feb 12 19:19:01.133303 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:19:01.133323 systemd[1]: Reached target user-config.target. Feb 12 19:19:01.141295 systemd[1]: Started systemd-logind.service. Feb 12 19:19:01.147663 tar[1423]: ./vlan Feb 12 19:19:01.147245 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 19:19:01.233109 tar[1423]: ./portmap Feb 12 19:19:01.301053 tar[1423]: ./host-local Feb 12 19:19:01.361198 tar[1423]: ./vrf Feb 12 19:19:01.412627 tar[1423]: ./bridge Feb 12 19:19:01.486749 tar[1423]: ./tuning Feb 12 19:19:01.540891 tar[1423]: ./firewall Feb 12 19:19:01.612496 tar[1423]: ./host-device Feb 12 19:19:01.656039 update_engine[1419]: I0212 19:19:01.641316 1419 main.cc:92] Flatcar Update Engine starting Feb 12 19:19:01.682199 tar[1423]: ./sbr Feb 12 19:19:01.710652 systemd[1]: Started update-engine.service. Feb 12 19:19:01.721103 update_engine[1419]: I0212 19:19:01.710754 1419 update_check_scheduler.cc:74] Next update check in 4m59s Feb 12 19:19:01.716855 systemd[1]: Started locksmithd.service. Feb 12 19:19:01.735571 tar[1423]: ./loopback Feb 12 19:19:01.799704 tar[1423]: ./dhcp Feb 12 19:19:01.805126 systemd[1]: Finished prepare-critools.service. Feb 12 19:19:01.816111 tar[1425]: linux-arm64/LICENSE Feb 12 19:19:01.816177 tar[1425]: linux-arm64/README.md Feb 12 19:19:01.823585 systemd[1]: Finished prepare-helm.service. Feb 12 19:19:01.897213 tar[1423]: ./ptp Feb 12 19:19:01.929356 tar[1423]: ./ipvlan Feb 12 19:19:01.960921 tar[1423]: ./bandwidth Feb 12 19:19:02.071824 systemd[1]: Finished prepare-cni-plugins.service. 
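
containerd booted with its CRI plugin warning that /etc/cni/net.d holds no network config, while prepare-cni-plugins staged the plugin binaries (bridge, portmap, host-local, ...) unpacked by the tar lines above into /opt/cni/bin. A minimal conflist that would satisfy the conf syncer might look like the sketch below; the network name and subnet are illustrative assumptions, not values from this host:

    # Illustrative CNI config; "example-net" and 10.88.0.0/16 are assumptions
    cat <<'EOF' >/etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "0.3.1",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
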
Feb 12 19:19:02.594470 sshd_keygen[1420]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:19:02.612451 systemd[1]: Finished sshd-keygen.service. Feb 12 19:19:02.618949 systemd[1]: Starting issuegen.service... Feb 12 19:19:02.624450 systemd[1]: Started waagent.service. Feb 12 19:19:02.629270 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:19:02.629495 systemd[1]: Finished issuegen.service. Feb 12 19:19:02.635479 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:19:02.660091 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:19:02.668478 systemd[1]: Started getty@tty1.service. Feb 12 19:19:02.675148 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:19:02.681140 systemd[1]: Reached target getty.target. Feb 12 19:19:02.691236 systemd[1]: Reached target multi-user.target. Feb 12 19:19:02.697474 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:19:02.705612 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:19:02.705830 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:19:02.711572 systemd[1]: Startup finished in 19.553s (kernel) + 25.479s (userspace) = 45.033s. Feb 12 19:19:03.395255 login[1551]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:19:03.395709 login[1550]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:19:03.460293 systemd[1]: Created slice user-500.slice. Feb 12 19:19:03.461247 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:19:03.463445 systemd-logind[1416]: New session 1 of user core. Feb 12 19:19:03.466119 systemd-logind[1416]: New session 2 of user core. Feb 12 19:19:03.543121 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:19:03.544561 systemd[1]: Starting user@500.service... Feb 12 19:19:03.568588 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:19:03.631550 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:19:03.823639 systemd[1557]: Queued start job for default target default.target. Feb 12 19:19:03.824528 systemd[1557]: Reached target paths.target. Feb 12 19:19:03.824638 systemd[1557]: Reached target sockets.target. Feb 12 19:19:03.824708 systemd[1557]: Reached target timers.target. Feb 12 19:19:03.824770 systemd[1557]: Reached target basic.target. Feb 12 19:19:03.824876 systemd[1557]: Reached target default.target. Feb 12 19:19:03.824952 systemd[1]: Started user@500.service. Feb 12 19:19:03.825563 systemd[1557]: Startup finished in 250ms. Feb 12 19:19:03.825785 systemd[1]: Started session-1.scope. Feb 12 19:19:03.826319 systemd[1]: Started session-2.scope. 
Feb 12 19:19:10.281091 waagent[1545]: 2024-02-12T19:19:10.280985Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 12 19:19:10.288975 waagent[1545]: 2024-02-12T19:19:10.288882Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 12 19:19:10.293692 waagent[1545]: 2024-02-12T19:19:10.293614Z INFO Daemon Daemon Python: 3.9.16 Feb 12 19:19:10.298250 waagent[1545]: 2024-02-12T19:19:10.298135Z INFO Daemon Daemon Run daemon Feb 12 19:19:10.302644 waagent[1545]: 2024-02-12T19:19:10.302582Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 12 19:19:10.319517 waagent[1545]: 2024-02-12T19:19:10.319391Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:19:10.335825 waagent[1545]: 2024-02-12T19:19:10.335691Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:19:10.346855 waagent[1545]: 2024-02-12T19:19:10.346764Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:19:10.352996 waagent[1545]: 2024-02-12T19:19:10.352913Z INFO Daemon Daemon Using waagent for provisioning Feb 12 19:19:10.359299 waagent[1545]: 2024-02-12T19:19:10.359218Z INFO Daemon Daemon Activate resource disk Feb 12 19:19:10.364038 waagent[1545]: 2024-02-12T19:19:10.363965Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 12 19:19:10.379663 waagent[1545]: 2024-02-12T19:19:10.379581Z INFO Daemon Daemon Found device: None Feb 12 19:19:10.385249 waagent[1545]: 2024-02-12T19:19:10.385146Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 12 19:19:10.394842 waagent[1545]: 2024-02-12T19:19:10.394768Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 12 19:19:10.408213 waagent[1545]: 2024-02-12T19:19:10.408125Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:19:10.414783 waagent[1545]: 2024-02-12T19:19:10.414712Z INFO Daemon Daemon Running default provisioning handler Feb 12 19:19:10.428245 waagent[1545]: 2024-02-12T19:19:10.428090Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:19:10.445801 waagent[1545]: 2024-02-12T19:19:10.445665Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:19:10.456677 waagent[1545]: 2024-02-12T19:19:10.456587Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:19:10.462750 waagent[1545]: 2024-02-12T19:19:10.462664Z INFO Daemon Daemon Copying ovf-env.xml Feb 12 19:19:10.557115 waagent[1545]: 2024-02-12T19:19:10.556923Z INFO Daemon Daemon Successfully mounted dvd Feb 12 19:19:10.675260 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 12 19:19:10.699847 waagent[1545]: 2024-02-12T19:19:10.699705Z INFO Daemon Daemon Detect protocol endpoint Feb 12 19:19:10.705067 waagent[1545]: 2024-02-12T19:19:10.704980Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:19:10.711181 waagent[1545]: 2024-02-12T19:19:10.711102Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 12 19:19:10.717907 waagent[1545]: 2024-02-12T19:19:10.717825Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 12 19:19:10.723524 waagent[1545]: 2024-02-12T19:19:10.723448Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 12 19:19:10.729247 waagent[1545]: 2024-02-12T19:19:10.729151Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 12 19:19:10.860063 waagent[1545]: 2024-02-12T19:19:10.859944Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 12 19:19:10.867106 waagent[1545]: 2024-02-12T19:19:10.867058Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 12 19:19:10.872546 waagent[1545]: 2024-02-12T19:19:10.872481Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 12 19:19:11.548496 waagent[1545]: 2024-02-12T19:19:11.548354Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 12 19:19:11.564782 waagent[1545]: 2024-02-12T19:19:11.564708Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 12 19:19:11.570459 waagent[1545]: 2024-02-12T19:19:11.570397Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 12 19:19:11.645328 waagent[1545]: 2024-02-12T19:19:11.645167Z INFO Daemon Daemon Found private key matching thumbprint 44473DB20258995C46DAD1993509002B1EDE7B36 Feb 12 19:19:11.654519 waagent[1545]: 2024-02-12T19:19:11.654439Z INFO Daemon Daemon Certificate with thumbprint A036B5042C0989FB3595FBC5D4618DA93A8AFB39 has no matching private key. Feb 12 19:19:11.664953 waagent[1545]: 2024-02-12T19:19:11.664868Z INFO Daemon Daemon Fetch goal state completed Feb 12 19:19:11.696158 waagent[1545]: 2024-02-12T19:19:11.696102Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: fe8a8d43-31cf-4d1c-9df0-c520fc9e45f8 New eTag: 10326288777671318988] Feb 12 19:19:11.708262 waagent[1545]: 2024-02-12T19:19:11.708164Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:19:11.724950 waagent[1545]: 2024-02-12T19:19:11.724889Z INFO Daemon Daemon Starting provisioning Feb 12 19:19:11.730769 waagent[1545]: 2024-02-12T19:19:11.730691Z INFO Daemon Daemon Handle ovf-env.xml. Feb 12 19:19:11.735950 waagent[1545]: 2024-02-12T19:19:11.735877Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-7e4be4023b] Feb 12 19:19:11.777305 waagent[1545]: 2024-02-12T19:19:11.777154Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-7e4be4023b] Feb 12 19:19:11.785032 waagent[1545]: 2024-02-12T19:19:11.784943Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 12 19:19:11.791482 waagent[1545]: 2024-02-12T19:19:11.791411Z INFO Daemon Daemon Primary interface is [eth0] Feb 12 19:19:11.807318 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 12 19:19:11.807538 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 12 19:19:11.807596 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 12 19:19:11.807786 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:19:11.811239 systemd-networkd[1275]: eth0: DHCPv6 lease lost Feb 12 19:19:11.812801 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:19:11.813030 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:19:11.814961 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:19:11.845833 systemd-networkd[1604]: enP42088s1: Link UP Feb 12 19:19:11.845844 systemd-networkd[1604]: enP42088s1: Gained carrier Feb 12 19:19:11.846723 systemd-networkd[1604]: eth0: Link UP Feb 12 19:19:11.846733 systemd-networkd[1604]: eth0: Gained carrier Feb 12 19:19:11.847046 systemd-networkd[1604]: lo: Link UP Feb 12 19:19:11.847055 systemd-networkd[1604]: lo: Gained carrier Feb 12 19:19:11.847373 systemd-networkd[1604]: eth0: Gained IPv6LL Feb 12 19:19:11.848445 systemd-networkd[1604]: Enumeration completed Feb 12 19:19:11.848574 systemd[1]: Started systemd-networkd.service. Feb 12 19:19:11.850174 systemd-networkd[1604]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:19:11.850469 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:19:11.857656 waagent[1545]: 2024-02-12T19:19:11.857506Z INFO Daemon Daemon Create user account if not exists Feb 12 19:19:11.864145 waagent[1545]: 2024-02-12T19:19:11.864064Z INFO Daemon Daemon User core already exists, skip useradd Feb 12 19:19:11.864302 systemd-networkd[1604]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:19:11.871743 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:19:11.872306 waagent[1545]: 2024-02-12T19:19:11.872210Z INFO Daemon Daemon Configure sudoer Feb 12 19:19:11.877680 waagent[1545]: 2024-02-12T19:19:11.877593Z INFO Daemon Daemon Configure sshd Feb 12 19:19:11.882216 waagent[1545]: 2024-02-12T19:19:11.882125Z INFO Daemon Daemon Deploy ssh public key. Feb 12 19:19:13.251179 waagent[1545]: 2024-02-12T19:19:13.251109Z INFO Daemon Daemon Provisioning complete Feb 12 19:19:13.295530 waagent[1545]: 2024-02-12T19:19:13.295433Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:19:13.302426 waagent[1545]: 2024-02-12T19:19:13.302353Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 12 19:19:13.313235 waagent[1545]: 2024-02-12T19:19:13.313148Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:19:13.617509 waagent[1614]: 2024-02-12T19:19:13.617416Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:19:13.618613 waagent[1614]: 2024-02-12T19:19:13.618559Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:13.618832 waagent[1614]: 2024-02-12T19:19:13.618787Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:13.631320 waagent[1614]: 2024-02-12T19:19:13.631246Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:19:13.631620 waagent[1614]: 2024-02-12T19:19:13.631574Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:19:13.698999 waagent[1614]: 2024-02-12T19:19:13.698872Z INFO ExtHandler ExtHandler Found private key matching thumbprint 44473DB20258995C46DAD1993509002B1EDE7B36 Feb 12 19:19:13.699379 waagent[1614]: 2024-02-12T19:19:13.699327Z INFO ExtHandler ExtHandler Certificate with thumbprint A036B5042C0989FB3595FBC5D4618DA93A8AFB39 has no matching private key. 
Feb 12 19:19:13.699696 waagent[1614]: 2024-02-12T19:19:13.699648Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:19:13.718341 waagent[1614]: 2024-02-12T19:19:13.718285Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 47f04bad-92c0-40d3-95f9-75fcc7f62dbf New eTag: 10326288777671318988] Feb 12 19:19:13.719103 waagent[1614]: 2024-02-12T19:19:13.719047Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:19:13.782166 waagent[1614]: 2024-02-12T19:19:13.782029Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:19:13.810678 waagent[1614]: 2024-02-12T19:19:13.810540Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1614 Feb 12 19:19:13.818096 waagent[1614]: 2024-02-12T19:19:13.817999Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:19:13.821095 waagent[1614]: 2024-02-12T19:19:13.820992Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:19:13.969428 waagent[1614]: 2024-02-12T19:19:13.969322Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:19:13.969961 waagent[1614]: 2024-02-12T19:19:13.969907Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:19:13.977939 waagent[1614]: 2024-02-12T19:19:13.977887Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:19:13.978630 waagent[1614]: 2024-02-12T19:19:13.978577Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:19:13.979871 waagent[1614]: 2024-02-12T19:19:13.979813Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:19:13.981339 waagent[1614]: 2024-02-12T19:19:13.981274Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:19:13.981691 waagent[1614]: 2024-02-12T19:19:13.981620Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:13.982143 waagent[1614]: 2024-02-12T19:19:13.982080Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:13.982752 waagent[1614]: 2024-02-12T19:19:13.982691Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 12 19:19:13.983071 waagent[1614]: 2024-02-12T19:19:13.983014Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:19:13.983071 waagent[1614]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:19:13.983071 waagent[1614]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:19:13.983071 waagent[1614]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:19:13.983071 waagent[1614]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:13.983071 waagent[1614]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:13.983071 waagent[1614]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:13.985204 waagent[1614]: 2024-02-12T19:19:13.985039Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:19:13.985997 waagent[1614]: 2024-02-12T19:19:13.985928Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:13.986172 waagent[1614]: 2024-02-12T19:19:13.986119Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:13.986773 waagent[1614]: 2024-02-12T19:19:13.986707Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:19:13.986922 waagent[1614]: 2024-02-12T19:19:13.986876Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:19:13.987034 waagent[1614]: 2024-02-12T19:19:13.986994Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:19:13.987376 waagent[1614]: 2024-02-12T19:19:13.987308Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:19:13.987727 waagent[1614]: 2024-02-12T19:19:13.987662Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:19:13.988704 waagent[1614]: 2024-02-12T19:19:13.988623Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:19:13.988887 waagent[1614]: 2024-02-12T19:19:13.988820Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 12 19:19:13.989200 waagent[1614]: 2024-02-12T19:19:13.989116Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:19:13.999605 waagent[1614]: 2024-02-12T19:19:13.999536Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:19:14.001637 waagent[1614]: 2024-02-12T19:19:14.001575Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:19:14.002916 waagent[1614]: 2024-02-12T19:19:14.002859Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:19:14.030428 waagent[1614]: 2024-02-12T19:19:14.030366Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
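The Destination/Gateway/Mask columns in the routing table dumped above are IPv4 addresses in little-endian hex. Decoding them with a small sketch ties the table back to the DHCPv4 lease reported earlier (10.200.20.34/24 via gateway 10.200.20.1), plus host routes to the Azure wireserver and the instance metadata endpoint:

    import socket

    def le_hex_to_ip(h):
        # /proc/net/route prints IPv4 addresses as little-endian hex words.
        return socket.inet_ntoa(bytes.fromhex(h)[::-1])

    for h in ("00000000", "0014C80A", "0114C80A", "10813FA8", "FEA9FEA9"):
        print(h, "->", le_hex_to_ip(h))
    # 00000000 -> 0.0.0.0         (default route)
    # 0014C80A -> 10.200.20.0     (local subnet)
    # 0114C80A -> 10.200.20.1     (gateway)
    # 10813FA8 -> 168.63.129.16   (Azure wireserver)
    # FEA9FEA9 -> 169.254.169.254 (instance metadata endpoint)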
Feb 12 19:19:14.047293 waagent[1614]: 2024-02-12T19:19:14.047144Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1604' Feb 12 19:19:14.159853 waagent[1614]: 2024-02-12T19:19:14.159787Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:19:14.316513 waagent[1545]: 2024-02-12T19:19:14.316352Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:19:14.320218 waagent[1545]: 2024-02-12T19:19:14.320145Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:19:15.469152 waagent[1648]: 2024-02-12T19:19:15.469059Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:19:15.470172 waagent[1648]: 2024-02-12T19:19:15.470115Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:19:15.470413 waagent[1648]: 2024-02-12T19:19:15.470364Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 12 19:19:15.478163 waagent[1648]: 2024-02-12T19:19:15.478049Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:19:15.478736 waagent[1648]: 2024-02-12T19:19:15.478681Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:15.478975 waagent[1648]: 2024-02-12T19:19:15.478926Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:15.491656 waagent[1648]: 2024-02-12T19:19:15.491579Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:19:15.500620 waagent[1648]: 2024-02-12T19:19:15.500562Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:19:15.501844 waagent[1648]: 2024-02-12T19:19:15.501786Z INFO ExtHandler Feb 12 19:19:15.502075 waagent[1648]: 2024-02-12T19:19:15.502027Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f94a8faf-20e4-489f-842c-1e6a00fb235a eTag: 10326288777671318988 source: Fabric] Feb 12 19:19:15.502917 waagent[1648]: 2024-02-12T19:19:15.502864Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 12 19:19:15.504261 waagent[1648]: 2024-02-12T19:19:15.504176Z INFO ExtHandler Feb 12 19:19:15.504488 waagent[1648]: 2024-02-12T19:19:15.504440Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:19:15.511014 waagent[1648]: 2024-02-12T19:19:15.510970Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:19:15.511612 waagent[1648]: 2024-02-12T19:19:15.511568Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:19:15.548179 waagent[1648]: 2024-02-12T19:19:15.548113Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Feb 12 19:19:15.619233 waagent[1648]: 2024-02-12T19:19:15.619082Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A036B5042C0989FB3595FBC5D4618DA93A8AFB39', 'hasPrivateKey': False} Feb 12 19:19:15.620533 waagent[1648]: 2024-02-12T19:19:15.620471Z INFO ExtHandler Downloaded certificate {'thumbprint': '44473DB20258995C46DAD1993509002B1EDE7B36', 'hasPrivateKey': True} Feb 12 19:19:15.621713 waagent[1648]: 2024-02-12T19:19:15.621655Z INFO ExtHandler Fetch goal state completed Feb 12 19:19:15.648181 waagent[1648]: 2024-02-12T19:19:15.648111Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1648 Feb 12 19:19:15.651793 waagent[1648]: 2024-02-12T19:19:15.651729Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:19:15.653387 waagent[1648]: 2024-02-12T19:19:15.653330Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:19:15.658144 waagent[1648]: 2024-02-12T19:19:15.658095Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:19:15.658680 waagent[1648]: 2024-02-12T19:19:15.658623Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:19:15.666690 waagent[1648]: 2024-02-12T19:19:15.666638Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:19:15.667365 waagent[1648]: 2024-02-12T19:19:15.667309Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:19:15.673319 waagent[1648]: 2024-02-12T19:19:15.673219Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 12 19:19:15.677010 waagent[1648]: 2024-02-12T19:19:15.676954Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 12 19:19:15.678615 waagent[1648]: 2024-02-12T19:19:15.678548Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:19:15.678885 waagent[1648]: 2024-02-12T19:19:15.678815Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:15.679497 waagent[1648]: 2024-02-12T19:19:15.679427Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:15.680100 waagent[1648]: 2024-02-12T19:19:15.680031Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:19:15.680831 waagent[1648]: 2024-02-12T19:19:15.680767Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 12 19:19:15.681005 waagent[1648]: 2024-02-12T19:19:15.680938Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:19:15.681005 waagent[1648]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:19:15.681005 waagent[1648]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:19:15.681005 waagent[1648]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:19:15.681005 waagent[1648]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:15.681005 waagent[1648]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:15.681005 waagent[1648]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:15.682996 waagent[1648]: 2024-02-12T19:19:15.682778Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:15.684146 waagent[1648]: 2024-02-12T19:19:15.684080Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:15.684842 waagent[1648]: 2024-02-12T19:19:15.684765Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:19:15.685221 waagent[1648]: 2024-02-12T19:19:15.685108Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:19:15.685549 waagent[1648]: 2024-02-12T19:19:15.685493Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:19:15.688005 waagent[1648]: 2024-02-12T19:19:15.687861Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:19:15.688529 waagent[1648]: 2024-02-12T19:19:15.688446Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:19:15.688741 waagent[1648]: 2024-02-12T19:19:15.688681Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:19:15.689313 waagent[1648]: 2024-02-12T19:19:15.689229Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 12 19:19:15.689511 waagent[1648]: 2024-02-12T19:19:15.689454Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:19:15.706654 waagent[1648]: 2024-02-12T19:19:15.706566Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:19:15.706654 waagent[1648]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:19:15.706654 waagent[1648]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:19:15.706654 waagent[1648]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:a9:57 brd ff:ff:ff:ff:ff:ff Feb 12 19:19:15.706654 waagent[1648]: 3: enP42088s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:a9:57 brd ff:ff:ff:ff:ff:ff\ altname enP42088p0s2 Feb 12 19:19:15.706654 waagent[1648]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:19:15.706654 waagent[1648]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:19:15.706654 waagent[1648]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:19:15.706654 waagent[1648]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:19:15.706654 waagent[1648]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:19:15.706654 waagent[1648]: 2: eth0 inet6 fe80::222:48ff:feb7:a957/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:19:15.707941 waagent[1648]: 2024-02-12T19:19:15.707870Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 12 19:19:15.708315 waagent[1648]: 2024-02-12T19:19:15.708261Z INFO ExtHandler ExtHandler Downloading manifest Feb 12 19:19:15.736627 waagent[1648]: 2024-02-12T19:19:15.736524Z INFO ExtHandler ExtHandler Feb 12 19:19:15.736916 waagent[1648]: 2024-02-12T19:19:15.736862Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a46663e2-23cd-4e22-b621-ddf4ee8cefd1 correlation 18fc9dfb-1a72-4a65-a46c-588498932ddd created: 2024-02-12T19:17:24.758377Z] Feb 12 19:19:15.737899 waagent[1648]: 2024-02-12T19:19:15.737845Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 12 19:19:15.739799 waagent[1648]: 2024-02-12T19:19:15.739745Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 12 19:19:15.765530 waagent[1648]: 2024-02-12T19:19:15.765453Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 12 19:19:15.780572 waagent[1648]: 2024-02-12T19:19:15.780488Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5CEEC223-2CF2-45B1-9FC0-4E886464CC3A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:19:15.960983 waagent[1648]: 2024-02-12T19:19:15.960868Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 12 19:19:15.960983 waagent[1648]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:19:15.960983 waagent[1648]: pkts bytes target prot opt in out source destination Feb 12 19:19:15.960983 waagent[1648]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:19:15.960983 waagent[1648]: pkts bytes target prot opt in out source destination Feb 12 19:19:15.960983 waagent[1648]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:19:15.960983 waagent[1648]: pkts bytes target prot opt in out source destination Feb 12 19:19:15.960983 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:19:15.960983 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:19:15.960983 waagent[1648]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:19:15.968249 waagent[1648]: 2024-02-12T19:19:15.968129Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 12 19:19:15.968249 waagent[1648]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:19:15.968249 waagent[1648]: pkts bytes target prot opt in out source destination Feb 12 19:19:15.968249 waagent[1648]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:19:15.968249 waagent[1648]: pkts bytes target prot opt in out source destination Feb 12 19:19:15.968249 waagent[1648]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:19:15.968249 waagent[1648]: pkts bytes target prot opt in out source destination Feb 12 19:19:15.968249 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:19:15.968249 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:19:15.968249 waagent[1648]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:19:15.969037 waagent[1648]: 2024-02-12T19:19:15.968990Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 12 19:19:38.587168 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 12 19:19:46.704001 update_engine[1419]: I0212 19:19:46.703960 1419 update_attempter.cc:509] Updating boot flags... Feb 12 19:20:01.768048 systemd[1]: Created slice system-sshd.slice. Feb 12 19:20:01.769240 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.12.6:40470.service. Feb 12 19:20:02.494965 sshd[1764]: Accepted publickey for core from 10.200.12.6 port 40470 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:02.514329 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:02.518177 systemd-logind[1416]: New session 3 of user core. Feb 12 19:20:02.518577 systemd[1]: Started session-3.scope. Feb 12 19:20:02.881275 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.12.6:40474.service. Feb 12 19:20:03.299431 sshd[1769]: Accepted publickey for core from 10.200.12.6 port 40474 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:03.299740 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:03.303376 systemd-logind[1416]: New session 4 of user core. Feb 12 19:20:03.303740 systemd[1]: Started session-4.scope. Feb 12 19:20:03.598972 sshd[1769]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:03.601988 systemd[1]: sshd@1-10.200.20.34:22-10.200.12.6:40474.service: Deactivated successfully. Feb 12 19:20:03.602969 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:20:03.603267 systemd-logind[1416]: Session 4 logged out. Waiting for processes to exit. 
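Looking back at the OUTPUT-chain rules EnvHandler dumped above: they allow DNS (dpt:53) and root-owned TCP traffic to the wireserver 168.63.129.16, and drop new or invalid connections to it from everything else. A hedged reconstruction using standard iptables invocations, not waagent's literal code:

    import subprocess

    WIRESERVER = "168.63.129.16"
    # Assumed equivalents of the three rules shown in the dump above.
    RULES = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)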
Feb 12 19:20:03.603933 systemd-logind[1416]: Removed session 4. Feb 12 19:20:03.666955 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.12.6:40482.service. Feb 12 19:20:04.083283 sshd[1776]: Accepted publickey for core from 10.200.12.6 port 40482 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:04.084502 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:04.088268 systemd-logind[1416]: New session 5 of user core. Feb 12 19:20:04.088639 systemd[1]: Started session-5.scope. Feb 12 19:20:04.380958 sshd[1776]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:04.383805 systemd[1]: sshd@2-10.200.20.34:22-10.200.12.6:40482.service: Deactivated successfully. Feb 12 19:20:04.385242 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:20:04.385819 systemd-logind[1416]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:20:04.386661 systemd-logind[1416]: Removed session 5. Feb 12 19:20:04.454715 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.12.6:40494.service. Feb 12 19:20:04.904685 sshd[1783]: Accepted publickey for core from 10.200.12.6 port 40494 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:04.906209 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:04.909717 systemd-logind[1416]: New session 6 of user core. Feb 12 19:20:04.910110 systemd[1]: Started session-6.scope. Feb 12 19:20:05.228698 sshd[1783]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:05.231697 systemd[1]: sshd@3-10.200.20.34:22-10.200.12.6:40494.service: Deactivated successfully. Feb 12 19:20:05.232414 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:20:05.233288 systemd-logind[1416]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:20:05.234073 systemd-logind[1416]: Removed session 6. Feb 12 19:20:05.299144 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.12.6:40510.service. Feb 12 19:20:05.714689 sshd[1790]: Accepted publickey for core from 10.200.12.6 port 40510 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:05.716315 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:05.719917 systemd-logind[1416]: New session 7 of user core. Feb 12 19:20:05.720384 systemd[1]: Started session-7.scope. Feb 12 19:20:06.302422 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:20:06.302946 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:20:06.998923 systemd[1]: Starting docker.service... 
Feb 12 19:20:07.068329 env[1809]: time="2024-02-12T19:20:07.068285773Z" level=info msg="Starting up" Feb 12 19:20:07.069895 env[1809]: time="2024-02-12T19:20:07.069809315Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:20:07.069895 env[1809]: time="2024-02-12T19:20:07.069890252Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:20:07.069988 env[1809]: time="2024-02-12T19:20:07.069909465Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:20:07.069988 env[1809]: time="2024-02-12T19:20:07.069920153Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:20:07.071655 env[1809]: time="2024-02-12T19:20:07.071631106Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:20:07.071696 env[1809]: time="2024-02-12T19:20:07.071654963Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:20:07.071696 env[1809]: time="2024-02-12T19:20:07.071671214Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:20:07.071696 env[1809]: time="2024-02-12T19:20:07.071681141Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:20:07.195586 env[1809]: time="2024-02-12T19:20:07.195548779Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 12 19:20:07.195586 env[1809]: time="2024-02-12T19:20:07.195575998Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 12 19:20:07.195783 env[1809]: time="2024-02-12T19:20:07.195705729Z" level=info msg="Loading containers: start." Feb 12 19:20:07.341217 kernel: Initializing XFRM netlink socket Feb 12 19:20:07.363905 env[1809]: time="2024-02-12T19:20:07.363870705Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:20:07.506511 systemd-networkd[1604]: docker0: Link UP Feb 12 19:20:07.522932 env[1809]: time="2024-02-12T19:20:07.522903231Z" level=info msg="Loading containers: done." Feb 12 19:20:07.532331 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1877452028-merged.mount: Deactivated successfully. Feb 12 19:20:07.544475 env[1809]: time="2024-02-12T19:20:07.544435610Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:20:07.544638 env[1809]: time="2024-02-12T19:20:07.544618138Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:20:07.544736 env[1809]: time="2024-02-12T19:20:07.544718968Z" level=info msg="Daemon has completed initialization" Feb 12 19:20:07.575433 systemd[1]: Started docker.service. Feb 12 19:20:07.580562 env[1809]: time="2024-02-12T19:20:07.580505209Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:20:07.596311 systemd[1]: Reloading. 
Feb 12 19:20:07.652124 /usr/lib/systemd/system-generators/torcx-generator[1938]: time="2024-02-12T19:20:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:20:07.652519 /usr/lib/systemd/system-generators/torcx-generator[1938]: time="2024-02-12T19:20:07Z" level=info msg="torcx already run" Feb 12 19:20:07.727968 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:20:07.727987 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:20:07.743250 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:20:07.818919 systemd[1]: Started kubelet.service. Feb 12 19:20:07.879377 kubelet[2004]: E0212 19:20:07.879250 2004 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:20:07.881608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:20:07.881768 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:20:12.354226 env[1429]: time="2024-02-12T19:20:12.353964446Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 19:20:13.253743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2280128557.mount: Deactivated successfully. 
Feb 12 19:20:14.909226 env[1429]: time="2024-02-12T19:20:14.909172970Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:14.914345 env[1429]: time="2024-02-12T19:20:14.914312017Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:14.917159 env[1429]: time="2024-02-12T19:20:14.917132205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:14.920563 env[1429]: time="2024-02-12T19:20:14.920524444Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:14.921347 env[1429]: time="2024-02-12T19:20:14.921319903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 12 19:20:14.930942 env[1429]: time="2024-02-12T19:20:14.930909800Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 19:20:16.890699 env[1429]: time="2024-02-12T19:20:16.890647389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:16.896493 env[1429]: time="2024-02-12T19:20:16.896454769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:16.900352 env[1429]: time="2024-02-12T19:20:16.900307759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:16.905423 env[1429]: time="2024-02-12T19:20:16.905380457Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:16.906215 env[1429]: time="2024-02-12T19:20:16.906164406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 12 19:20:16.916095 env[1429]: time="2024-02-12T19:20:16.916061306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 19:20:18.074715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:20:18.074883 systemd[1]: Stopped kubelet.service. Feb 12 19:20:18.076391 systemd[1]: Started kubelet.service. 
Feb 12 19:20:18.135816 kubelet[2032]: E0212 19:20:18.135765 2032 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:20:18.138490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:20:18.138633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:20:18.204266 env[1429]: time="2024-02-12T19:20:18.204085793Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:18.211366 env[1429]: time="2024-02-12T19:20:18.211305625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:18.215720 env[1429]: time="2024-02-12T19:20:18.215687543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:18.219250 env[1429]: time="2024-02-12T19:20:18.219221419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:18.219939 env[1429]: time="2024-02-12T19:20:18.219913099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 12 19:20:18.229159 env[1429]: time="2024-02-12T19:20:18.229098193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:20:19.405021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775600542.mount: Deactivated successfully. 
Feb 12 19:20:19.872521 env[1429]: time="2024-02-12T19:20:19.872473198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:19.877843 env[1429]: time="2024-02-12T19:20:19.877804378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:19.881341 env[1429]: time="2024-02-12T19:20:19.881308193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:19.886734 env[1429]: time="2024-02-12T19:20:19.886686277Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:19.887262 env[1429]: time="2024-02-12T19:20:19.887238196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 19:20:19.895586 env[1429]: time="2024-02-12T19:20:19.895555929Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:20:20.457376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704772785.mount: Deactivated successfully. Feb 12 19:20:20.481795 env[1429]: time="2024-02-12T19:20:20.481746145Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:20.490762 env[1429]: time="2024-02-12T19:20:20.490719574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:20.494787 env[1429]: time="2024-02-12T19:20:20.494742360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:20.499157 env[1429]: time="2024-02-12T19:20:20.499126204Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:20.499725 env[1429]: time="2024-02-12T19:20:20.499699046Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 12 19:20:20.508044 env[1429]: time="2024-02-12T19:20:20.508002585Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 19:20:21.362766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813601134.mount: Deactivated successfully. 
Feb 12 19:20:25.021878 env[1429]: time="2024-02-12T19:20:25.021834542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:25.031335 env[1429]: time="2024-02-12T19:20:25.031301780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:25.036995 env[1429]: time="2024-02-12T19:20:25.036967004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:25.042576 env[1429]: time="2024-02-12T19:20:25.042546631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:25.043372 env[1429]: time="2024-02-12T19:20:25.043346459Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 12 19:20:25.052751 env[1429]: time="2024-02-12T19:20:25.052721017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 19:20:25.958481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739271572.mount: Deactivated successfully. Feb 12 19:20:26.341118 env[1429]: time="2024-02-12T19:20:26.341063694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:26.346855 env[1429]: time="2024-02-12T19:20:26.346815975Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:26.350716 env[1429]: time="2024-02-12T19:20:26.350681616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:26.354832 env[1429]: time="2024-02-12T19:20:26.354790679Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:26.355467 env[1429]: time="2024-02-12T19:20:26.355440395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 12 19:20:28.324734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 19:20:28.324912 systemd[1]: Stopped kubelet.service. Feb 12 19:20:28.326396 systemd[1]: Started kubelet.service. Feb 12 19:20:28.373952 kubelet[2112]: E0212 19:20:28.373899 2112 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:20:28.375513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:20:28.375656 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
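Every kubelet start so far has failed flag validation for the same reason: no --container-runtime-endpoint. One conventional repair is a systemd drop-in pointing kubelet at containerd's CRI socket; this is a sketch under assumptions (the socket path is the containerd 1.6.x default, and the unit is assumed to expand $KUBELET_EXTRA_ARGS, as kubeadm-style units do). The successful kubelet start later in this log gets its flags from the host's own provisioning instead.

    import os

    drop_in_dir = "/etc/systemd/system/kubelet.service.d"
    # Point kubelet at containerd's CRI socket (default path for containerd 1.6.x).
    drop_in = (
        "[Service]\n"
        "Environment=KUBELET_EXTRA_ARGS="
        "--container-runtime-endpoint=unix:///run/containerd/containerd.sock\n"
    )
    os.makedirs(drop_in_dir, exist_ok=True)
    with open(os.path.join(drop_in_dir, "20-cri-endpoint.conf"), "w") as f:
        f.write(drop_in)
    # Followed by: systemctl daemon-reload && systemctl restart kubelet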
Feb 12 19:20:31.169348 systemd[1]: Stopped kubelet.service. Feb 12 19:20:31.184169 systemd[1]: Reloading. Feb 12 19:20:31.251542 /usr/lib/systemd/system-generators/torcx-generator[2145]: time="2024-02-12T19:20:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:20:31.251891 /usr/lib/systemd/system-generators/torcx-generator[2145]: time="2024-02-12T19:20:31Z" level=info msg="torcx already run" Feb 12 19:20:31.330533 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:20:31.330551 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:20:31.346251 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:20:31.430248 systemd[1]: Started kubelet.service. Feb 12 19:20:31.490864 kubelet[2209]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:20:31.490864 kubelet[2209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:20:31.491245 kubelet[2209]: I0212 19:20:31.490910 2209 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:20:31.492205 kubelet[2209]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:20:31.492205 kubelet[2209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:20:31.985013 kubelet[2209]: I0212 19:20:31.984976 2209 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:20:31.985013 kubelet[2209]: I0212 19:20:31.985004 2209 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:20:31.985405 kubelet[2209]: I0212 19:20:31.985382 2209 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:20:31.989845 kubelet[2209]: E0212 19:20:31.989822 2209 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:31.989991 kubelet[2209]: I0212 19:20:31.989980 2209 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:20:31.991475 kubelet[2209]: W0212 19:20:31.991459 2209 machine.go:65] Cannot read vendor id correctly, set empty. 
Feb 12 19:20:31.991962 kubelet[2209]: I0212 19:20:31.991945 2209 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:20:31.992317 kubelet[2209]: I0212 19:20:31.992303 2209 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:20:31.992386 kubelet[2209]: I0212 19:20:31.992373 2209 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:20:31.992468 kubelet[2209]: I0212 19:20:31.992394 2209 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:20:31.992468 kubelet[2209]: I0212 19:20:31.992406 2209 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:20:31.992516 kubelet[2209]: I0212 19:20:31.992498 2209 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:20:31.995009 kubelet[2209]: I0212 19:20:31.994988 2209 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:20:31.995009 kubelet[2209]: I0212 19:20:31.995013 2209 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:20:31.995111 kubelet[2209]: I0212 19:20:31.995041 2209 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:20:31.995111 kubelet[2209]: I0212 19:20:31.995051 2209 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:20:31.995639 kubelet[2209]: I0212 19:20:31.995622 2209 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:20:31.995960 kubelet[2209]: W0212 19:20:31.995943 2209 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:20:31.996397 kubelet[2209]: I0212 19:20:31.996380 2209 server.go:1186] "Started kubelet" Feb 12 19:20:32.004558 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 19:20:32.004649 kubelet[2209]: W0212 19:20:31.999674 2209 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.004649 kubelet[2209]: E0212 19:20:31.999711 2209 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.004649 kubelet[2209]: E0212 19:20:31.999749 2209 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb4ec371a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 31, 996359076, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 31, 996359076, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.34:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.34:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:20:32.004778 kubelet[2209]: W0212 19:20:31.999883 2209 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-7e4be4023b&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.004778 kubelet[2209]: E0212 19:20:31.999906 2209 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-7e4be4023b&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.004778 kubelet[2209]: I0212 19:20:32.000149 2209 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:20:32.004778 kubelet[2209]: I0212 19:20:32.000719 2209 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:20:32.005444 kubelet[2209]: E0212 19:20:32.005423 2209 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:20:32.005444 kubelet[2209]: E0212 19:20:32.005447 2209 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:20:32.006635 kubelet[2209]: I0212 19:20:32.005857 2209 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:20:32.007637 kubelet[2209]: I0212 19:20:32.007337 2209 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:20:32.007637 kubelet[2209]: I0212 19:20:32.007409 2209 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:20:32.007781 kubelet[2209]: W0212 19:20:32.007742 2209 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.007827 kubelet[2209]: E0212 19:20:32.007784 2209 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.008299 kubelet[2209]: E0212 19:20:32.008265 2209 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-7e4be4023b?timeout=10s": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.112010 kubelet[2209]: I0212 19:20:32.111979 2209 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.112884 kubelet[2209]: I0212 19:20:32.112858 2209 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:20:32.112972 kubelet[2209]: I0212 19:20:32.112921 2209 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:20:32.112972 kubelet[2209]: I0212 19:20:32.112943 2209 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:20:32.113083 kubelet[2209]: E0212 19:20:32.112859 2209 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.120364 kubelet[2209]: I0212 19:20:32.120338 2209 policy_none.go:49] "None policy: Start" Feb 12 19:20:32.120963 kubelet[2209]: I0212 19:20:32.120942 2209 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:20:32.121020 kubelet[2209]: I0212 19:20:32.120970 2209 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:20:32.128132 kubelet[2209]: I0212 19:20:32.128110 2209 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:20:32.129289 kubelet[2209]: I0212 19:20:32.129270 2209 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:20:32.131405 kubelet[2209]: E0212 19:20:32.131370 2209 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-7e4be4023b\" not found" Feb 12 19:20:32.208893 kubelet[2209]: E0212 19:20:32.208860 2209 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-7e4be4023b?timeout=10s": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.283790 kubelet[2209]: I0212 19:20:32.282743 2209 
kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:20:32.314767 kubelet[2209]: I0212 19:20:32.314739 2209 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.315095 kubelet[2209]: E0212 19:20:32.315078 2209 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.336004 kubelet[2209]: I0212 19:20:32.335984 2209 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:20:32.336136 kubelet[2209]: I0212 19:20:32.336126 2209 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:20:32.336236 kubelet[2209]: I0212 19:20:32.336227 2209 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:20:32.336348 kubelet[2209]: E0212 19:20:32.336340 2209 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:20:32.337272 kubelet[2209]: W0212 19:20:32.337232 2209 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.337393 kubelet[2209]: E0212 19:20:32.337383 2209 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.436545 kubelet[2209]: I0212 19:20:32.436505 2209 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:20:32.439575 kubelet[2209]: I0212 19:20:32.439546 2209 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:20:32.440723 kubelet[2209]: I0212 19:20:32.440702 2209 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:20:32.442171 kubelet[2209]: I0212 19:20:32.442050 2209 status_manager.go:698] "Failed to get status for pod" podUID=2a241dcf89a80f125870539e0b789f93 pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" err="Get \"https://10.200.20.34:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-7e4be4023b\": dial tcp 10.200.20.34:6443: connect: connection refused" Feb 12 19:20:32.442282 kubelet[2209]: I0212 19:20:32.442243 2209 status_manager.go:698] "Failed to get status for pod" podUID=2c16d9acc7b4ae0d31b02780d65b345e pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" err="Get \"https://10.200.20.34:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-7e4be4023b\": dial tcp 10.200.20.34:6443: connect: connection refused" Feb 12 19:20:32.446438 kubelet[2209]: I0212 19:20:32.446420 2209 status_manager.go:698] "Failed to get status for pod" podUID=11d98af3e9a374ae9620ff6b7d98376b pod="kube-system/kube-scheduler-ci-3510.3.2-a-7e4be4023b" err="Get \"https://10.200.20.34:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-7e4be4023b\": dial tcp 10.200.20.34:6443: connect: connection refused" Feb 12 19:20:32.511082 kubelet[2209]: I0212 19:20:32.511049 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a241dcf89a80f125870539e0b789f93-k8s-certs\") pod 
\"kube-apiserver-ci-3510.3.2-a-7e4be4023b\" (UID: \"2a241dcf89a80f125870539e0b789f93\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.511436 kubelet[2209]: I0212 19:20:32.511092 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.511436 kubelet[2209]: I0212 19:20:32.511115 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.511436 kubelet[2209]: I0212 19:20:32.511136 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.511436 kubelet[2209]: I0212 19:20:32.511156 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a241dcf89a80f125870539e0b789f93-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-7e4be4023b\" (UID: \"2a241dcf89a80f125870539e0b789f93\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.511436 kubelet[2209]: I0212 19:20:32.511178 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.511550 kubelet[2209]: I0212 19:20:32.511229 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.511550 kubelet[2209]: I0212 19:20:32.511256 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11d98af3e9a374ae9620ff6b7d98376b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-7e4be4023b\" (UID: \"11d98af3e9a374ae9620ff6b7d98376b\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.511550 kubelet[2209]: I0212 19:20:32.511286 2209 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a241dcf89a80f125870539e0b789f93-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-7e4be4023b\" (UID: \"2a241dcf89a80f125870539e0b789f93\") " 
pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.609578 kubelet[2209]: E0212 19:20:32.609546 2209 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-7e4be4023b?timeout=10s": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.717381 kubelet[2209]: I0212 19:20:32.717358 2209 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.717889 kubelet[2209]: E0212 19:20:32.717875 2209 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:32.745639 env[1429]: time="2024-02-12T19:20:32.745598159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-7e4be4023b,Uid:2a241dcf89a80f125870539e0b789f93,Namespace:kube-system,Attempt:0,}" Feb 12 19:20:32.746258 env[1429]: time="2024-02-12T19:20:32.746135076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-7e4be4023b,Uid:11d98af3e9a374ae9620ff6b7d98376b,Namespace:kube-system,Attempt:0,}" Feb 12 19:20:32.747631 env[1429]: time="2024-02-12T19:20:32.747601014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-7e4be4023b,Uid:2c16d9acc7b4ae0d31b02780d65b345e,Namespace:kube-system,Attempt:0,}" Feb 12 19:20:32.820544 kubelet[2209]: W0212 19:20:32.820496 2209 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.820703 kubelet[2209]: E0212 19:20:32.820692 2209 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.876339 kubelet[2209]: W0212 19:20:32.875977 2209 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:32.876339 kubelet[2209]: E0212 19:20:32.876034 2209 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:33.380223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436555671.mount: Deactivated successfully. 
Feb 12 19:20:33.406417 env[1429]: time="2024-02-12T19:20:33.406367076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.410316 kubelet[2209]: E0212 19:20:33.410277 2209 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-7e4be4023b?timeout=10s": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:33.410487 env[1429]: time="2024-02-12T19:20:33.410446539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.417117 env[1429]: time="2024-02-12T19:20:33.417080798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.423760 env[1429]: time="2024-02-12T19:20:33.423717658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.427497 env[1429]: time="2024-02-12T19:20:33.427456958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.433446 env[1429]: time="2024-02-12T19:20:33.433403531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.437029 env[1429]: time="2024-02-12T19:20:33.436987496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.440440 env[1429]: time="2024-02-12T19:20:33.440412404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.444554 env[1429]: time="2024-02-12T19:20:33.444526279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.448895 env[1429]: time="2024-02-12T19:20:33.448850550Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.453718 env[1429]: time="2024-02-12T19:20:33.453687764Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.471158 env[1429]: time="2024-02-12T19:20:33.471119015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:33.525858 kubelet[2209]: I0212 19:20:33.525567 2209 
kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:33.526253 kubelet[2209]: E0212 19:20:33.525927 2209 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:33.528808 env[1429]: time="2024-02-12T19:20:33.528737275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:20:33.531124 env[1429]: time="2024-02-12T19:20:33.528817264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:20:33.531124 env[1429]: time="2024-02-12T19:20:33.528845474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:20:33.531124 env[1429]: time="2024-02-12T19:20:33.529035983Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6681269274adf3d150f241a2bb7bfefd4b47376597ec37e5dd1cebc2a6589438 pid=2285 runtime=io.containerd.runc.v2 Feb 12 19:20:33.531302 kubelet[2209]: W0212 19:20:33.529451 2209 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-7e4be4023b&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:33.531302 kubelet[2209]: E0212 19:20:33.529507 2209 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-7e4be4023b&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Feb 12 19:20:33.534777 env[1429]: time="2024-02-12T19:20:33.534714379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:20:33.534863 env[1429]: time="2024-02-12T19:20:33.534788925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:20:33.534863 env[1429]: time="2024-02-12T19:20:33.534818456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:20:33.535050 env[1429]: time="2024-02-12T19:20:33.535016887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b241702712f5a72a2597890096a18ca9bf1980e735679fcb4373cadce96423d7 pid=2300 runtime=io.containerd.runc.v2 Feb 12 19:20:33.567294 env[1429]: time="2024-02-12T19:20:33.566255049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:20:33.567294 env[1429]: time="2024-02-12T19:20:33.566302706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:20:33.567294 env[1429]: time="2024-02-12T19:20:33.566313790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:20:33.567294 env[1429]: time="2024-02-12T19:20:33.566433513Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5792cd430cc44b01f51a6bf3c7f1eeac27d266370a799ea51b0fc0e6f28d3ba1 pid=2334 runtime=io.containerd.runc.v2 Feb 12 19:20:33.594062 env[1429]: time="2024-02-12T19:20:33.589688091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-7e4be4023b,Uid:2c16d9acc7b4ae0d31b02780d65b345e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6681269274adf3d150f241a2bb7bfefd4b47376597ec37e5dd1cebc2a6589438\"" Feb 12 19:20:33.602740 env[1429]: time="2024-02-12T19:20:33.601587118Z" level=info msg="CreateContainer within sandbox \"6681269274adf3d150f241a2bb7bfefd4b47376597ec37e5dd1cebc2a6589438\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:20:33.614323 env[1429]: time="2024-02-12T19:20:33.614285991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-7e4be4023b,Uid:2a241dcf89a80f125870539e0b789f93,Namespace:kube-system,Attempt:0,} returns sandbox id \"b241702712f5a72a2597890096a18ca9bf1980e735679fcb4373cadce96423d7\"" Feb 12 19:20:33.618609 env[1429]: time="2024-02-12T19:20:33.618553802Z" level=info msg="CreateContainer within sandbox \"b241702712f5a72a2597890096a18ca9bf1980e735679fcb4373cadce96423d7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:20:33.624235 env[1429]: time="2024-02-12T19:20:33.624174377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-7e4be4023b,Uid:11d98af3e9a374ae9620ff6b7d98376b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5792cd430cc44b01f51a6bf3c7f1eeac27d266370a799ea51b0fc0e6f28d3ba1\"" Feb 12 19:20:33.626462 env[1429]: time="2024-02-12T19:20:33.626414941Z" level=info msg="CreateContainer within sandbox \"5792cd430cc44b01f51a6bf3c7f1eeac27d266370a799ea51b0fc0e6f28d3ba1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:20:33.666571 env[1429]: time="2024-02-12T19:20:33.666466542Z" level=info msg="CreateContainer within sandbox \"6681269274adf3d150f241a2bb7bfefd4b47376597ec37e5dd1cebc2a6589438\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c5785ca684baf005fd71c68c403c4b4235e62e9f0dc853af3b49e2f43cff789\"" Feb 12 19:20:33.667805 env[1429]: time="2024-02-12T19:20:33.667779613Z" level=info msg="StartContainer for \"0c5785ca684baf005fd71c68c403c4b4235e62e9f0dc853af3b49e2f43cff789\"" Feb 12 19:20:33.701566 env[1429]: time="2024-02-12T19:20:33.701521032Z" level=info msg="CreateContainer within sandbox \"b241702712f5a72a2597890096a18ca9bf1980e735679fcb4373cadce96423d7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ced97aee4e124242bc34959a31ec3c31c20d7d6bc7b6c16743bf89090c524cc\"" Feb 12 19:20:33.702271 env[1429]: time="2024-02-12T19:20:33.702249333Z" level=info msg="StartContainer for \"1ced97aee4e124242bc34959a31ec3c31c20d7d6bc7b6c16743bf89090c524cc\"" Feb 12 19:20:33.712318 env[1429]: time="2024-02-12T19:20:33.712273688Z" level=info msg="CreateContainer within sandbox \"5792cd430cc44b01f51a6bf3c7f1eeac27d266370a799ea51b0fc0e6f28d3ba1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6022826fe8999c4b5a4e8a13befc417e2ada333a6ab60422a21b230a04a9cb20\"" Feb 12 19:20:33.712951 env[1429]: 
time="2024-02-12T19:20:33.712922561Z" level=info msg="StartContainer for \"6022826fe8999c4b5a4e8a13befc417e2ada333a6ab60422a21b230a04a9cb20\"" Feb 12 19:20:33.723354 env[1429]: time="2024-02-12T19:20:33.723312206Z" level=info msg="StartContainer for \"0c5785ca684baf005fd71c68c403c4b4235e62e9f0dc853af3b49e2f43cff789\" returns successfully" Feb 12 19:20:33.806898 env[1429]: time="2024-02-12T19:20:33.806859805Z" level=info msg="StartContainer for \"1ced97aee4e124242bc34959a31ec3c31c20d7d6bc7b6c16743bf89090c524cc\" returns successfully" Feb 12 19:20:33.828370 env[1429]: time="2024-02-12T19:20:33.828326422Z" level=info msg="StartContainer for \"6022826fe8999c4b5a4e8a13befc417e2ada333a6ab60422a21b230a04a9cb20\" returns successfully" Feb 12 19:20:35.127878 kubelet[2209]: I0212 19:20:35.127847 2209 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:36.741560 kubelet[2209]: E0212 19:20:36.741522 2209 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-7e4be4023b\" not found" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:36.815916 kubelet[2209]: I0212 19:20:36.815863 2209 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:36.879518 kubelet[2209]: E0212 19:20:36.879418 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb4ec371a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 31, 996359076, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 31, 996359076, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:20:36.940718 kubelet[2209]: E0212 19:20:36.940615 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb4f4df993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 5437843, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 5437843, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:20:36.993906 kubelet[2209]: E0212 19:20:36.993733 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb55a6f51a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-7e4be4023b status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111932698, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111932698, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
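[Note: the rejected events are named <node>.<suffix>, where the suffix is the event's FirstTimestamp as Unix nanoseconds printed in hex. Recomputing it for the "Starting kubelet." event reproduces the name in the log exactly, assuming the node clock is UTC, which matches the journal's +0000 timestamps even though the dump prints time.Local.]

    package main

    import (
        "fmt"
        "time"
    )

    // Event names are <involved object>.<FirstTimestamp as hex UnixNano>.
    // FirstTimestamp copied from the log: 2024-02-12 19:20:31.996359076.
    func main() {
        first := time.Date(2024, time.February, 12, 19, 20, 31, 996359076, time.UTC)
        fmt.Printf("ci-3510.3.2-a-7e4be4023b.%x\n", first.UnixNano())
        // Output: ci-3510.3.2-a-7e4be4023b.17b333cb4ec371a4
    }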
Feb 12 19:20:37.002476 kubelet[2209]: I0212 19:20:37.002435 2209 apiserver.go:52] "Watching apiserver" Feb 12 19:20:37.008199 kubelet[2209]: I0212 19:20:37.008149 2209 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:20:37.034316 kubelet[2209]: I0212 19:20:37.034277 2209 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:20:37.049706 kubelet[2209]: E0212 19:20:37.049611 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb55a6f51a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-7e4be4023b status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111932698, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111932738, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:20:37.102777 kubelet[2209]: E0212 19:20:37.102679 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb55a70a34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-7e4be4023b status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111938100, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111938100, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:20:37.158585 kubelet[2209]: E0212 19:20:37.158481 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb55a70a34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-7e4be4023b status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111938100, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111938420, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:20:37.212421 kubelet[2209]: E0212 19:20:37.212339 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb55a7172d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-7e4be4023b status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111941421, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111941421, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:20:37.267713 kubelet[2209]: E0212 19:20:37.267563 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb55a7172d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-7e4be4023b status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111941421, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111941701, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:20:37.321115 kubelet[2209]: E0212 19:20:37.321006 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb56ac108c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 129044620, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 129044620, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:20:37.534661 kubelet[2209]: E0212 19:20:37.534497 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb55a6f51a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-7e4be4023b status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111932698, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 314700455, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:20:37.933863 kubelet[2209]: E0212 19:20:37.933769 2209 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-7e4be4023b.17b333cb55a70a34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-7e4be4023b", UID:"ci-3510.3.2-a-7e4be4023b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-7e4be4023b status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 111938100, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 32, 314704896, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:20:39.539673 systemd[1]: Reloading. 
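[Note: the same node-condition events keep reappearing with Count climbing from 1 to 3 and only LastTimestamp advancing while FirstTimestamp stays fixed — the client-go recorder correlating duplicates into a single event record rather than posting new objects. Every attempt is still rejected because the "default" namespace does not exist this early in bootstrap. A sketch of that correlation under the simplifying assumption that the aggregation key is just (object, reason); the real correlator hashes more fields such as source, type, and message.]

    package main

    import (
        "fmt"
        "time"
    )

    type eventRecord struct {
        First, Last time.Time
        Count       int
    }

    // record updates one entry per (object, reason): Count and Last advance,
    // First stays fixed -- matching Count:1 -> 2 -> 3 in the log above.
    func record(cache map[string]*eventRecord, obj, reason string, at time.Time) *eventRecord {
        key := obj + "/" + reason
        if r, ok := cache[key]; ok {
            r.Count++
            r.Last = at
            return r
        }
        r := &eventRecord{First: at, Last: at, Count: 1}
        cache[key] = r
        return r
    }

    func main() {
        cache := map[string]*eventRecord{}
        base := time.Date(2024, time.February, 12, 19, 20, 32, 111932698, time.UTC)
        for i := 0; i < 3; i++ {
            r := record(cache, "ci-3510.3.2-a-7e4be4023b", "NodeHasSufficientMemory",
                base.Add(time.Duration(i)*100*time.Millisecond))
            fmt.Printf("Count:%d First:%v Last:%v\n", r.Count, r.First, r.Last)
        }
    }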
Feb 12 19:20:39.615413 /usr/lib/systemd/system-generators/torcx-generator[2531]: time="2024-02-12T19:20:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:20:39.615779 /usr/lib/systemd/system-generators/torcx-generator[2531]: time="2024-02-12T19:20:39Z" level=info msg="torcx already run" Feb 12 19:20:39.702921 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:20:39.702941 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:20:39.718548 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:20:39.823946 kubelet[2209]: I0212 19:20:39.823814 2209 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:20:39.824579 systemd[1]: Stopping kubelet.service... Feb 12 19:20:39.836602 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:20:39.836976 systemd[1]: Stopped kubelet.service. Feb 12 19:20:39.839581 systemd[1]: Started kubelet.service. Feb 12 19:20:39.917236 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:20:39.917236 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:20:39.917555 kubelet[2600]: I0212 19:20:39.917267 2600 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:20:39.918450 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:20:39.918450 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:20:39.921495 kubelet[2600]: I0212 19:20:39.921474 2600 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:20:39.921601 kubelet[2600]: I0212 19:20:39.921592 2600 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:20:39.921833 kubelet[2600]: I0212 19:20:39.921820 2600 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:20:39.923055 kubelet[2600]: I0212 19:20:39.923038 2600 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:20:39.927241 kubelet[2600]: W0212 19:20:39.927222 2600 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:20:39.928386 kubelet[2600]: I0212 19:20:39.928364 2600 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:20:39.928749 kubelet[2600]: I0212 19:20:39.928730 2600 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:20:39.928825 kubelet[2600]: I0212 19:20:39.928809 2600 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:20:39.928893 kubelet[2600]: I0212 19:20:39.928833 2600 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:20:39.928893 kubelet[2600]: I0212 19:20:39.928844 2600 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:20:39.928893 kubelet[2600]: I0212 19:20:39.928873 2600 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:20:39.929149 kubelet[2600]: I0212 19:20:39.927740 2600 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:20:39.932546 kubelet[2600]: I0212 19:20:39.932522 2600 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:20:39.932596 kubelet[2600]: I0212 19:20:39.932551 2600 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:20:39.932596 kubelet[2600]: I0212 19:20:39.932582 2600 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:20:39.932596 kubelet[2600]: I0212 19:20:39.932593 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:20:39.935736 kubelet[2600]: I0212 19:20:39.935704 2600 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:20:39.936425 kubelet[2600]: I0212 19:20:39.936410 2600 server.go:1186] "Started kubelet" Feb 12 19:20:39.938926 kubelet[2600]: I0212 19:20:39.938909 2600 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:20:39.958216 kubelet[2600]: E0212 19:20:39.951387 2600 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:20:39.958216 kubelet[2600]: E0212 19:20:39.951418 2600 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:20:39.958506 kubelet[2600]: I0212 19:20:39.958487 2600 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:20:39.966236 kubelet[2600]: I0212 19:20:39.959146 2600 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:20:39.967629 sudo[2616]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 19:20:39.967824 sudo[2616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 19:20:39.970638 kubelet[2600]: I0212 19:20:39.970616 2600 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:20:39.973304 kubelet[2600]: I0212 19:20:39.973248 2600 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:20:40.083315 kubelet[2600]: I0212 19:20:40.083290 2600 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:20:40.088291 kubelet[2600]: I0212 19:20:40.088268 2600 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.100863 kubelet[2600]: I0212 19:20:40.100839 2600 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.104420 kubelet[2600]: I0212 19:20:40.104395 2600 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.140424 kubelet[2600]: I0212 19:20:40.140396 2600 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:20:40.140590 kubelet[2600]: I0212 19:20:40.140580 2600 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:20:40.140657 kubelet[2600]: I0212 19:20:40.140648 2600 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:20:40.140752 kubelet[2600]: E0212 19:20:40.140743 2600 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:20:40.144701 kubelet[2600]: I0212 19:20:40.144675 2600 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:20:40.144848 kubelet[2600]: I0212 19:20:40.144838 2600 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:20:40.144992 kubelet[2600]: I0212 19:20:40.144982 2600 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:20:40.145181 kubelet[2600]: I0212 19:20:40.145169 2600 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:20:40.145307 kubelet[2600]: I0212 19:20:40.145295 2600 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:20:40.145358 kubelet[2600]: I0212 19:20:40.145349 2600 policy_none.go:49] "None policy: Start" Feb 12 19:20:40.146403 kubelet[2600]: I0212 19:20:40.146383 2600 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:20:40.146516 kubelet[2600]: I0212 19:20:40.146506 2600 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:20:40.146727 kubelet[2600]: I0212 19:20:40.146715 2600 state_mem.go:75] "Updated machine memory state" Feb 12 19:20:40.148031 kubelet[2600]: I0212 19:20:40.148014 2600 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:20:40.152712 kubelet[2600]: I0212 19:20:40.152688 2600 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:20:40.240914 kubelet[2600]: I0212 19:20:40.240879 2600 topology_manager.go:210] 
"Topology Admit Handler" Feb 12 19:20:40.241147 kubelet[2600]: I0212 19:20:40.241132 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:20:40.241284 kubelet[2600]: I0212 19:20:40.241271 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:20:40.274591 kubelet[2600]: I0212 19:20:40.274562 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a241dcf89a80f125870539e0b789f93-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-7e4be4023b\" (UID: \"2a241dcf89a80f125870539e0b789f93\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.274776 kubelet[2600]: I0212 19:20:40.274764 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.274849 kubelet[2600]: I0212 19:20:40.274840 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.274916 kubelet[2600]: I0212 19:20:40.274908 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.275040 kubelet[2600]: I0212 19:20:40.275029 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11d98af3e9a374ae9620ff6b7d98376b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-7e4be4023b\" (UID: \"11d98af3e9a374ae9620ff6b7d98376b\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.275386 kubelet[2600]: I0212 19:20:40.275369 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a241dcf89a80f125870539e0b789f93-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-7e4be4023b\" (UID: \"2a241dcf89a80f125870539e0b789f93\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.275520 kubelet[2600]: I0212 19:20:40.275509 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a241dcf89a80f125870539e0b789f93-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-7e4be4023b\" (UID: \"2a241dcf89a80f125870539e0b789f93\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.275620 kubelet[2600]: I0212 19:20:40.275611 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-ca-certs\") pod 
\"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.275708 kubelet[2600]: I0212 19:20:40.275699 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c16d9acc7b4ae0d31b02780d65b345e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" (UID: \"2c16d9acc7b4ae0d31b02780d65b345e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:40.508322 sudo[2616]: pam_unix(sudo:session): session closed for user root Feb 12 19:20:40.935307 kubelet[2600]: I0212 19:20:40.935267 2600 apiserver.go:52] "Watching apiserver" Feb 12 19:20:40.973594 kubelet[2600]: I0212 19:20:40.973556 2600 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:20:40.980585 kubelet[2600]: I0212 19:20:40.980560 2600 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:20:41.181479 kubelet[2600]: E0212 19:20:41.181448 2600 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-7e4be4023b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:41.541461 kubelet[2600]: E0212 19:20:41.541431 2600 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-7e4be4023b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:41.742631 kubelet[2600]: E0212 19:20:41.742604 2600 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-7e4be4023b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" Feb 12 19:20:42.239799 sudo[1794]: pam_unix(sudo:session): session closed for user root Feb 12 19:20:42.310496 sshd[1790]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:42.314072 systemd[1]: sshd@4-10.200.20.34:22-10.200.12.6:40510.service: Deactivated successfully. Feb 12 19:20:42.314946 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:20:42.316066 systemd-logind[1416]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:20:42.316911 systemd-logind[1416]: Removed session 7. 
Feb 12 19:20:42.339780 kubelet[2600]: I0212 19:20:42.339748 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-7e4be4023b" podStartSLOduration=2.33969056 pod.CreationTimestamp="2024-02-12 19:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:20:42.339592331 +0000 UTC m=+2.488226701" watchObservedRunningTime="2024-02-12 19:20:42.33969056 +0000 UTC m=+2.488324930" Feb 12 19:20:42.340283 kubelet[2600]: I0212 19:20:42.340268 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-7e4be4023b" podStartSLOduration=2.340242801 pod.CreationTimestamp="2024-02-12 19:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:20:41.947228184 +0000 UTC m=+2.095862594" watchObservedRunningTime="2024-02-12 19:20:42.340242801 +0000 UTC m=+2.488877171" Feb 12 19:20:48.655732 kubelet[2600]: I0212 19:20:48.655585 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-7e4be4023b" podStartSLOduration=8.655552432 pod.CreationTimestamp="2024-02-12 19:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:20:42.739320809 +0000 UTC m=+2.887955179" watchObservedRunningTime="2024-02-12 19:20:48.655552432 +0000 UTC m=+8.804186802" Feb 12 19:20:52.630861 kubelet[2600]: I0212 19:20:52.630839 2600 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:20:52.631704 env[1429]: time="2024-02-12T19:20:52.631603286Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
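[Note: the podStartSLOduration figures above line up with watchObservedRunningTime minus the pod's CreationTimestamp, with image-pull time excluded — firstStartedPulling and lastFinishedPulling are the zero value here because the control-plane images were already on disk. For the controller-manager pod: 19:20:42.33969056 − 19:20:40 = 2.33969056s; for the apiserver pod, observed later, 19:20:48.655552432 − 19:20:40 = 8.655552432s. A sketch of the arithmetic, assuming a UTC clock.]

    package main

    import (
        "fmt"
        "time"
    )

    // Startup SLO arithmetic for kube-controller-manager, values from the log.
    func main() {
        created := time.Date(2024, time.February, 12, 19, 20, 40, 0, time.UTC)
        observed := time.Date(2024, time.February, 12, 19, 20, 42, 339690560, time.UTC)
        pull := time.Duration(0) // no image pull: both pull timestamps are zero
        fmt.Println(observed.Sub(created) - pull) // 2.33969056s
    }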
Feb 12 19:20:52.632134 kubelet[2600]: I0212 19:20:52.632118 2600 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:20:53.487728 kubelet[2600]: I0212 19:20:53.487689 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:20:53.497814 kubelet[2600]: I0212 19:20:53.497765 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:20:53.498132 kubelet[2600]: W0212 19:20:53.498113 2600 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-7e4be4023b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-7e4be4023b' and this object Feb 12 19:20:53.498260 kubelet[2600]: E0212 19:20:53.498247 2600 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-7e4be4023b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-7e4be4023b' and this object Feb 12 19:20:53.498382 kubelet[2600]: W0212 19:20:53.498369 2600 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-7e4be4023b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-7e4be4023b' and this object Feb 12 19:20:53.498467 kubelet[2600]: E0212 19:20:53.498458 2600 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-7e4be4023b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-7e4be4023b' and this object Feb 12 19:20:53.622023 kubelet[2600]: I0212 19:20:53.621991 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:20:53.643459 kubelet[2600]: I0212 19:20:53.643431 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-lib-modules\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.643943 kubelet[2600]: I0212 19:20:53.643928 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-config-path\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.644049 kubelet[2600]: I0212 19:20:53.644040 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-host-proc-sys-net\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.644166 kubelet[2600]: I0212 19:20:53.644156 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49aacb68-7686-4159-a78f-1af5d081d919-kube-proxy\") pod 
\"kube-proxy-htwxf\" (UID: \"49aacb68-7686-4159-a78f-1af5d081d919\") " pod="kube-system/kube-proxy-htwxf" Feb 12 19:20:53.644453 kubelet[2600]: I0212 19:20:53.644436 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49aacb68-7686-4159-a78f-1af5d081d919-lib-modules\") pod \"kube-proxy-htwxf\" (UID: \"49aacb68-7686-4159-a78f-1af5d081d919\") " pod="kube-system/kube-proxy-htwxf" Feb 12 19:20:53.644588 kubelet[2600]: I0212 19:20:53.644578 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-cgroup\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.644694 kubelet[2600]: I0212 19:20:53.644685 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cni-path\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.644803 kubelet[2600]: I0212 19:20:53.644793 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982ss\" (UniqueName: \"kubernetes.io/projected/49aacb68-7686-4159-a78f-1af5d081d919-kube-api-access-982ss\") pod \"kube-proxy-htwxf\" (UID: \"49aacb68-7686-4159-a78f-1af5d081d919\") " pod="kube-system/kube-proxy-htwxf" Feb 12 19:20:53.644921 kubelet[2600]: I0212 19:20:53.644911 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-bpf-maps\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.645026 kubelet[2600]: I0212 19:20:53.645017 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-host-proc-sys-kernel\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.645131 kubelet[2600]: I0212 19:20:53.645121 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-hubble-tls\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.645262 kubelet[2600]: I0212 19:20:53.645241 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-xtables-lock\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.645318 kubelet[2600]: I0212 19:20:53.645287 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-run\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.645346 kubelet[2600]: I0212 19:20:53.645325 2600 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-etc-cni-netd\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.645378 kubelet[2600]: I0212 19:20:53.645348 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49aacb68-7686-4159-a78f-1af5d081d919-xtables-lock\") pod \"kube-proxy-htwxf\" (UID: \"49aacb68-7686-4159-a78f-1af5d081d919\") " pod="kube-system/kube-proxy-htwxf" Feb 12 19:20:53.645404 kubelet[2600]: I0212 19:20:53.645378 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-clustermesh-secrets\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.645428 kubelet[2600]: I0212 19:20:53.645407 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qphl\" (UniqueName: \"kubernetes.io/projected/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-kube-api-access-8qphl\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.645450 kubelet[2600]: I0212 19:20:53.645430 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-hostproc\") pod \"cilium-np8dt\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " pod="kube-system/cilium-np8dt" Feb 12 19:20:53.747169 kubelet[2600]: I0212 19:20:53.746408 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7zts\" (UniqueName: \"kubernetes.io/projected/1d83b632-f389-4f26-ac21-b096cfb6251e-kube-api-access-p7zts\") pod \"cilium-operator-f59cbd8c6-r9h2d\" (UID: \"1d83b632-f389-4f26-ac21-b096cfb6251e\") " pod="kube-system/cilium-operator-f59cbd8c6-r9h2d" Feb 12 19:20:53.747468 kubelet[2600]: I0212 19:20:53.747453 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d83b632-f389-4f26-ac21-b096cfb6251e-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-r9h2d\" (UID: \"1d83b632-f389-4f26-ac21-b096cfb6251e\") " pod="kube-system/cilium-operator-f59cbd8c6-r9h2d" Feb 12 19:20:54.825721 env[1429]: time="2024-02-12T19:20:54.825675031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-r9h2d,Uid:1d83b632-f389-4f26-ac21-b096cfb6251e,Namespace:kube-system,Attempt:0,}" Feb 12 19:20:54.865081 env[1429]: time="2024-02-12T19:20:54.864898652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:20:54.865081 env[1429]: time="2024-02-12T19:20:54.864936940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:20:54.865081 env[1429]: time="2024-02-12T19:20:54.864946863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:20:54.868137 env[1429]: time="2024-02-12T19:20:54.865109820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736 pid=2707 runtime=io.containerd.runc.v2 Feb 12 19:20:54.883003 systemd[1]: run-containerd-runc-k8s.io-fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736-runc.uL2WO6.mount: Deactivated successfully. Feb 12 19:20:54.914871 env[1429]: time="2024-02-12T19:20:54.914833935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-r9h2d,Uid:1d83b632-f389-4f26-ac21-b096cfb6251e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\"" Feb 12 19:20:54.918212 env[1429]: time="2024-02-12T19:20:54.916889088Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:20:54.990227 env[1429]: time="2024-02-12T19:20:54.990096403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htwxf,Uid:49aacb68-7686-4159-a78f-1af5d081d919,Namespace:kube-system,Attempt:0,}" Feb 12 19:20:55.001101 env[1429]: time="2024-02-12T19:20:55.001038119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-np8dt,Uid:d0918d1e-df63-47d5-9fe2-7a7a7dab1d43,Namespace:kube-system,Attempt:0,}" Feb 12 19:20:55.039908 env[1429]: time="2024-02-12T19:20:55.030165215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:20:55.039908 env[1429]: time="2024-02-12T19:20:55.030260396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:20:55.039908 env[1429]: time="2024-02-12T19:20:55.030270758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:20:55.039908 env[1429]: time="2024-02-12T19:20:55.030411510Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b014979eec0059069d920c5d46282b90a585764103350e3536efbe1c34193a21 pid=2747 runtime=io.containerd.runc.v2 Feb 12 19:20:55.049848 env[1429]: time="2024-02-12T19:20:55.049675018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:20:55.049848 env[1429]: time="2024-02-12T19:20:55.049712227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:20:55.049848 env[1429]: time="2024-02-12T19:20:55.049723109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:20:55.050224 env[1429]: time="2024-02-12T19:20:55.050159488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea pid=2769 runtime=io.containerd.runc.v2 Feb 12 19:20:55.097531 env[1429]: time="2024-02-12T19:20:55.096871592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-np8dt,Uid:d0918d1e-df63-47d5-9fe2-7a7a7dab1d43,Namespace:kube-system,Attempt:0,} returns sandbox id \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\"" Feb 12 19:20:55.104781 env[1429]: time="2024-02-12T19:20:55.104737767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htwxf,Uid:49aacb68-7686-4159-a78f-1af5d081d919,Namespace:kube-system,Attempt:0,} returns sandbox id \"b014979eec0059069d920c5d46282b90a585764103350e3536efbe1c34193a21\"" Feb 12 19:20:55.109358 env[1429]: time="2024-02-12T19:20:55.109323123Z" level=info msg="CreateContainer within sandbox \"b014979eec0059069d920c5d46282b90a585764103350e3536efbe1c34193a21\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:20:55.150634 env[1429]: time="2024-02-12T19:20:55.150588757Z" level=info msg="CreateContainer within sandbox \"b014979eec0059069d920c5d46282b90a585764103350e3536efbe1c34193a21\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a1f7135965a244eed544703ef86d78f73baaf346e0732b7e422a0c976757ad08\"" Feb 12 19:20:55.153612 env[1429]: time="2024-02-12T19:20:55.153338858Z" level=info msg="StartContainer for \"a1f7135965a244eed544703ef86d78f73baaf346e0732b7e422a0c976757ad08\"" Feb 12 19:20:55.212625 env[1429]: time="2024-02-12T19:20:55.212355660Z" level=info msg="StartContainer for \"a1f7135965a244eed544703ef86d78f73baaf346e0732b7e422a0c976757ad08\" returns successfully" Feb 12 19:20:56.662509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150654501.mount: Deactivated successfully. 
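The interleaved "loading plugin \"io.containerd.event.v1.publisher\"" … "starting signal loop" entries above are each a containerd-shim-runc-v2 process booting for a new pod sandbox; once a shim is up, the CRI plugin's RunPodSandbox/CreateContainer/StartContainer calls return, as they just did for kube-proxy-htwxf. A minimal sketch of that same pull → create → start sequence against containerd's Go client, assuming the default socket path and a stand-in image (neither is taken from this log):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Socket path and image reference are assumptions for the sketch.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "sketch-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("sketch-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask is the step that spawns the runc v2 shim whose
	// "starting signal loop" message shows up in the journal above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	// Left running here; a real caller would Wait, Kill, and Delete the task.
	log.Printf("task started, pid %d", task.Pid())
}
```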
Feb 12 19:20:57.858050 env[1429]: time="2024-02-12T19:20:57.858002892Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:57.863830 env[1429]: time="2024-02-12T19:20:57.863797353Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:57.866624 env[1429]: time="2024-02-12T19:20:57.866588721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:20:57.867686 env[1429]: time="2024-02-12T19:20:57.867174968Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:20:57.869953 env[1429]: time="2024-02-12T19:20:57.869906883Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:20:57.871774 env[1429]: time="2024-02-12T19:20:57.871735641Z" level=info msg="CreateContainer within sandbox \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:20:57.897608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583539856.mount: Deactivated successfully. 
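The operator image above is pulled by tag-plus-digest, so the sha256 after the '@' pins the exact manifest and the tag is only advisory; the sha256 that PullImage returns is the locally resolved image ID. A small sketch of the pinning property using the OCI go-digest library — the reference and manifest bytes below are stand-ins, not the real Cilium artifacts:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/opencontainers/go-digest"
)

// pinnedDigest extracts the digest a "repo:tag@sha256:..." reference pins.
func pinnedDigest(ref string) (digest.Digest, error) {
	i := strings.LastIndex(ref, "@")
	if i < 0 {
		return "", fmt.Errorf("reference %q carries no digest", ref)
	}
	return digest.Parse(ref[i+1:])
}

func main() {
	// Stand-in reference; a real one looks like the operator image above.
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:" + strings.Repeat("ab", 32)
	want, err := pinnedDigest(ref)
	if err != nil {
		panic(err)
	}
	manifest := []byte(`{"schemaVersion":2}`) // stand-in for fetched bytes
	got := digest.FromBytes(manifest)
	// The pull only succeeds when the fetched bytes hash to the pinned digest.
	fmt.Println("pinned:", want, "fetched:", got, "match:", want == got)
}
```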
Feb 12 19:20:57.912935 env[1429]: time="2024-02-12T19:20:57.912891358Z" level=info msg="CreateContainer within sandbox \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\"" Feb 12 19:20:57.914771 env[1429]: time="2024-02-12T19:20:57.914743601Z" level=info msg="StartContainer for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\"" Feb 12 19:20:57.973337 env[1429]: time="2024-02-12T19:20:57.973288783Z" level=info msg="StartContainer for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" returns successfully" Feb 12 19:20:58.209378 kubelet[2600]: I0212 19:20:58.208410 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-r9h2d" podStartSLOduration=-9.223372031646404e+09 pod.CreationTimestamp="2024-02-12 19:20:53 +0000 UTC" firstStartedPulling="2024-02-12 19:20:54.916139515 +0000 UTC m=+15.064773885" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:20:58.207946776 +0000 UTC m=+18.356581106" watchObservedRunningTime="2024-02-12 19:20:58.208371947 +0000 UTC m=+18.357006277" Feb 12 19:20:58.209378 kubelet[2600]: I0212 19:20:58.208663 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-htwxf" podStartSLOduration=5.208637083 pod.CreationTimestamp="2024-02-12 19:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:20:56.205308116 +0000 UTC m=+16.353942486" watchObservedRunningTime="2024-02-12 19:20:58.208637083 +0000 UTC m=+18.357271413" Feb 12 19:21:02.556077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568673500.mount: Deactivated successfully. Feb 12 19:21:05.171265 env[1429]: time="2024-02-12T19:21:05.171213703Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:05.178155 env[1429]: time="2024-02-12T19:21:05.178100211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:05.183130 env[1429]: time="2024-02-12T19:21:05.183093640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:05.183849 env[1429]: time="2024-02-12T19:21:05.183820738Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:21:05.187864 env[1429]: time="2024-02-12T19:21:05.187822938Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:21:05.215386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228392844.mount: Deactivated successfully. 
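The podStartSLOduration=-9.223372031646404e+09 a few entries up is not a real measurement: lastFinishedPulling is Go's zero time ("0001-01-01 00:00:00 +0000 UTC"), and subtracting across roughly 2023 years saturates time.Duration (int64 nanoseconds) at its minimum, about -9.223372e+09 seconds. The last digits in the log differ slightly, presumably because kubelet folds in further offsets after the saturation — that reading is an inference, not stated in the log. A sketch that reproduces the magnitude:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var never time.Time // zero value: "0001-01-01 00:00:00 +0000 UTC", as logged
	pulled := time.Date(2024, 2, 12, 19, 20, 54, 916139515, time.UTC)
	// The true difference cannot fit in a time.Duration, so Sub saturates
	// at math.MinInt64 nanoseconds rather than wrapping.
	d := never.Sub(pulled)
	fmt.Printf("%.6e seconds\n", d.Seconds()) // ≈ -9.223372e+09, as in the log
}
```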
Feb 12 19:21:05.237131 env[1429]: time="2024-02-12T19:21:05.237084416Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\"" Feb 12 19:21:05.237875 env[1429]: time="2024-02-12T19:21:05.237851362Z" level=info msg="StartContainer for \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\"" Feb 12 19:21:05.287045 env[1429]: time="2024-02-12T19:21:05.286991376Z" level=info msg="StartContainer for \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\" returns successfully" Feb 12 19:21:06.213276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e-rootfs.mount: Deactivated successfully. Feb 12 19:21:06.569487 env[1429]: time="2024-02-12T19:21:06.569154319Z" level=info msg="shim disconnected" id=504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e Feb 12 19:21:06.569876 env[1429]: time="2024-02-12T19:21:06.569853329Z" level=warning msg="cleaning up after shim disconnected" id=504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e namespace=k8s.io Feb 12 19:21:06.569952 env[1429]: time="2024-02-12T19:21:06.569938785Z" level=info msg="cleaning up dead shim" Feb 12 19:21:06.576836 env[1429]: time="2024-02-12T19:21:06.576798908Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3047 runtime=io.containerd.runc.v2\n" Feb 12 19:21:07.232824 env[1429]: time="2024-02-12T19:21:07.232712394Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:21:07.266661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591847057.mount: Deactivated successfully. Feb 12 19:21:07.272107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2808450599.mount: Deactivated successfully. Feb 12 19:21:07.282874 env[1429]: time="2024-02-12T19:21:07.282821618Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\"" Feb 12 19:21:07.284889 env[1429]: time="2024-02-12T19:21:07.283544191Z" level=info msg="StartContainer for \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\"" Feb 12 19:21:07.331957 env[1429]: time="2024-02-12T19:21:07.331171478Z" level=info msg="StartContainer for \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\" returns successfully" Feb 12 19:21:07.341962 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:21:07.342222 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:21:07.342387 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:21:07.343887 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:21:07.360343 systemd[1]: Finished systemd-sysctl.service. 
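Each Cilium init container (mount-cgroup above, then apply-sysctl-overwrites, and so on) runs once and exits, which is exactly what the "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" triples record. A sketch of watching those exits directly on containerd's event stream; the filter expression follows containerd's filters syntax and is an assumption, not lifted from any Flatcar tooling:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Subscribe to task-exit events only; each one corresponds to a
	// shim-cleanup sequence like the ones in the journal above.
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case envelope := <-ch:
			log.Printf("%s: %v", envelope.Topic, envelope.Event)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```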
Feb 12 19:21:07.387213 env[1429]: time="2024-02-12T19:21:07.387144861Z" level=info msg="shim disconnected" id=8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f Feb 12 19:21:07.387425 env[1429]: time="2024-02-12T19:21:07.387405629Z" level=warning msg="cleaning up after shim disconnected" id=8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f namespace=k8s.io Feb 12 19:21:07.387487 env[1429]: time="2024-02-12T19:21:07.387475802Z" level=info msg="cleaning up dead shim" Feb 12 19:21:07.395051 env[1429]: time="2024-02-12T19:21:07.395008108Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3114 runtime=io.containerd.runc.v2\n" Feb 12 19:21:08.241333 env[1429]: time="2024-02-12T19:21:08.241297974Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:21:08.263624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f-rootfs.mount: Deactivated successfully. Feb 12 19:21:08.269951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154288853.mount: Deactivated successfully. Feb 12 19:21:08.289202 env[1429]: time="2024-02-12T19:21:08.289148168Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\"" Feb 12 19:21:08.291622 env[1429]: time="2024-02-12T19:21:08.291589730Z" level=info msg="StartContainer for \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\"" Feb 12 19:21:08.350050 env[1429]: time="2024-02-12T19:21:08.349994517Z" level=info msg="StartContainer for \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\" returns successfully" Feb 12 19:21:08.377559 env[1429]: time="2024-02-12T19:21:08.377513265Z" level=info msg="shim disconnected" id=1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3 Feb 12 19:21:08.377910 env[1429]: time="2024-02-12T19:21:08.377875891Z" level=warning msg="cleaning up after shim disconnected" id=1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3 namespace=k8s.io Feb 12 19:21:08.377992 env[1429]: time="2024-02-12T19:21:08.377979510Z" level=info msg="cleaning up dead shim" Feb 12 19:21:08.385737 env[1429]: time="2024-02-12T19:21:08.385703270Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3173 runtime=io.containerd.runc.v2\n" Feb 12 19:21:09.243210 env[1429]: time="2024-02-12T19:21:09.239617688Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:21:09.263531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3-rootfs.mount: Deactivated successfully. Feb 12 19:21:09.277604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774233570.mount: Deactivated successfully. Feb 12 19:21:09.284486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount190619180.mount: Deactivated successfully. 
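The mount-bpf-fs step whose container was just created conventionally mounts the BPF pseudo-filesystem so that pinned maps and programs survive agent restarts. A sketch of the underlying syscall, with the conventional /sys/fs/bpf target assumed rather than read from this log:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		if err == unix.EBUSY {
			// Typical on a node where the agent already mounted it.
			log.Println("bpffs already mounted")
			return
		}
		log.Fatal(err)
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```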
Feb 12 19:21:09.296623 env[1429]: time="2024-02-12T19:21:09.296572217Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\"" Feb 12 19:21:09.298285 env[1429]: time="2024-02-12T19:21:09.298252157Z" level=info msg="StartContainer for \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\"" Feb 12 19:21:09.343696 env[1429]: time="2024-02-12T19:21:09.343652743Z" level=info msg="StartContainer for \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\" returns successfully" Feb 12 19:21:09.372395 env[1429]: time="2024-02-12T19:21:09.372345626Z" level=info msg="shim disconnected" id=1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f Feb 12 19:21:09.372395 env[1429]: time="2024-02-12T19:21:09.372391994Z" level=warning msg="cleaning up after shim disconnected" id=1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f namespace=k8s.io Feb 12 19:21:09.372395 env[1429]: time="2024-02-12T19:21:09.372401196Z" level=info msg="cleaning up dead shim" Feb 12 19:21:09.379322 env[1429]: time="2024-02-12T19:21:09.379276344Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3227 runtime=io.containerd.runc.v2\n" Feb 12 19:21:10.240584 env[1429]: time="2024-02-12T19:21:10.240513806Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:21:10.271274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701091342.mount: Deactivated successfully. Feb 12 19:21:10.277476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892763731.mount: Deactivated successfully. Feb 12 19:21:10.292379 env[1429]: time="2024-02-12T19:21:10.292287553Z" level=info msg="CreateContainer within sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\"" Feb 12 19:21:10.294501 env[1429]: time="2024-02-12T19:21:10.294462176Z" level=info msg="StartContainer for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\"" Feb 12 19:21:10.347426 env[1429]: time="2024-02-12T19:21:10.347368643Z" level=info msg="StartContainer for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" returns successfully" Feb 12 19:21:10.420209 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
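The kernel line "Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!" that follows the cilium-agent start fires because kernel.unprivileged_bpf_disabled is 0 on this host; Cilium's own programs are loaded by the privileged agent either way. A sketch for checking the knob (raising it to 1 or 2 is a separate hardening decision and needs root):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// 0 = unprivileged eBPF allowed (triggers the kernel warning),
	// 1 = disabled permanently, 2 = disabled but re-enablable.
	b, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("kernel.unprivileged_bpf_disabled =", strings.TrimSpace(string(b)))
}
```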
Feb 12 19:21:10.431462 kubelet[2600]: I0212 19:21:10.431287 2600 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:21:10.453909 kubelet[2600]: I0212 19:21:10.453872 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:21:10.458792 kubelet[2600]: I0212 19:21:10.458744 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:21:10.562228 kubelet[2600]: I0212 19:21:10.562111 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7z8g\" (UniqueName: \"kubernetes.io/projected/1dc20005-51f8-4f9e-94e3-13cfb03e9dc9-kube-api-access-h7z8g\") pod \"coredns-787d4945fb-ptxhd\" (UID: \"1dc20005-51f8-4f9e-94e3-13cfb03e9dc9\") " pod="kube-system/coredns-787d4945fb-ptxhd" Feb 12 19:21:10.562228 kubelet[2600]: I0212 19:21:10.562173 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b97b5fde-8b08-4a27-bd3c-b1f3ba1747b5-config-volume\") pod \"coredns-787d4945fb-v9pdf\" (UID: \"b97b5fde-8b08-4a27-bd3c-b1f3ba1747b5\") " pod="kube-system/coredns-787d4945fb-v9pdf" Feb 12 19:21:10.562228 kubelet[2600]: I0212 19:21:10.562213 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dc20005-51f8-4f9e-94e3-13cfb03e9dc9-config-volume\") pod \"coredns-787d4945fb-ptxhd\" (UID: \"1dc20005-51f8-4f9e-94e3-13cfb03e9dc9\") " pod="kube-system/coredns-787d4945fb-ptxhd" Feb 12 19:21:10.562421 kubelet[2600]: I0212 19:21:10.562238 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7rt\" (UniqueName: \"kubernetes.io/projected/b97b5fde-8b08-4a27-bd3c-b1f3ba1747b5-kube-api-access-nt7rt\") pod \"coredns-787d4945fb-v9pdf\" (UID: \"b97b5fde-8b08-4a27-bd3c-b1f3ba1747b5\") " pod="kube-system/coredns-787d4945fb-v9pdf" Feb 12 19:21:10.757563 env[1429]: time="2024-02-12T19:21:10.757521554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ptxhd,Uid:1dc20005-51f8-4f9e-94e3-13cfb03e9dc9,Namespace:kube-system,Attempt:0,}" Feb 12 19:21:10.762133 env[1429]: time="2024-02-12T19:21:10.762100039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v9pdf,Uid:b97b5fde-8b08-4a27-bd3c-b1f3ba1747b5,Namespace:kube-system,Attempt:0,}" Feb 12 19:21:10.810213 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
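The earlier reflector denials ("no relationship found between node 'ci-3510.3.2-a-7e4be4023b' and this object") and the coredns config-volume mounts above are two sides of the node authorizer: a kubelet may read a ConfigMap or Secret only once a pod referencing it is bound to that node, so such denials race pod assignment and clear on their own. A sketch of spotting that specific Forbidden with client-go; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubelet credentials path; not shown anywhere in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_, err = cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
	if apierrors.IsForbidden(err) {
		// Transient until a pod referencing the ConfigMap is bound here.
		fmt.Println("node authorizer denial:", err)
	}
}
```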
Feb 12 19:21:12.446741 systemd-networkd[1604]: cilium_host: Link UP Feb 12 19:21:12.446847 systemd-networkd[1604]: cilium_net: Link UP Feb 12 19:21:12.446850 systemd-networkd[1604]: cilium_net: Gained carrier Feb 12 19:21:12.446957 systemd-networkd[1604]: cilium_host: Gained carrier Feb 12 19:21:12.453066 systemd-networkd[1604]: cilium_host: Gained IPv6LL Feb 12 19:21:12.453240 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:21:12.648296 systemd-networkd[1604]: cilium_vxlan: Link UP Feb 12 19:21:12.648305 systemd-networkd[1604]: cilium_vxlan: Gained carrier Feb 12 19:21:12.891216 kernel: NET: Registered PF_ALG protocol family Feb 12 19:21:13.453362 systemd-networkd[1604]: cilium_net: Gained IPv6LL Feb 12 19:21:13.590727 systemd-networkd[1604]: lxc_health: Link UP Feb 12 19:21:13.609055 systemd-networkd[1604]: lxc_health: Gained carrier Feb 12 19:21:13.609682 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:21:13.851945 systemd-networkd[1604]: lxc86ab5091ad2a: Link UP Feb 12 19:21:13.864229 kernel: eth0: renamed from tmp1b2c6 Feb 12 19:21:13.871381 systemd-networkd[1604]: lxc9c5fe44263c6: Link UP Feb 12 19:21:13.880229 kernel: eth0: renamed from tmpca170 Feb 12 19:21:13.898342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc86ab5091ad2a: link becomes ready Feb 12 19:21:13.895021 systemd-networkd[1604]: lxc86ab5091ad2a: Gained carrier Feb 12 19:21:13.913151 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9c5fe44263c6: link becomes ready Feb 12 19:21:13.914085 systemd-networkd[1604]: lxc9c5fe44263c6: Gained carrier Feb 12 19:21:14.541325 systemd-networkd[1604]: cilium_vxlan: Gained IPv6LL Feb 12 19:21:14.989310 systemd-networkd[1604]: lxc9c5fe44263c6: Gained IPv6LL Feb 12 19:21:15.017686 kubelet[2600]: I0212 19:21:15.017657 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-np8dt" podStartSLOduration=-9.223372014837173e+09 pod.CreationTimestamp="2024-02-12 19:20:53 +0000 UTC" firstStartedPulling="2024-02-12 19:20:55.098737333 +0000 UTC m=+15.247371703" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:21:11.256839138 +0000 UTC m=+31.405473508" watchObservedRunningTime="2024-02-12 19:21:15.017602783 +0000 UTC m=+35.166237113" Feb 12 19:21:15.501300 systemd-networkd[1604]: lxc_health: Gained IPv6LL Feb 12 19:21:15.629294 systemd-networkd[1604]: lxc86ab5091ad2a: Gained IPv6LL Feb 12 19:21:17.557446 env[1429]: time="2024-02-12T19:21:17.557369474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:17.557446 env[1429]: time="2024-02-12T19:21:17.557407800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:17.557446 env[1429]: time="2024-02-12T19:21:17.557418762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:17.558075 env[1429]: time="2024-02-12T19:21:17.558021178Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b2c663d5aadb6569fd4d9cccec03d859dfdeed07d607c08ad02ba2d2c0f1903 pid=3770 runtime=io.containerd.runc.v2 Feb 12 19:21:17.564731 env[1429]: time="2024-02-12T19:21:17.564661838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:17.564837 env[1429]: time="2024-02-12T19:21:17.564732770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:17.564837 env[1429]: time="2024-02-12T19:21:17.564761014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:17.564939 env[1429]: time="2024-02-12T19:21:17.564900996Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca170533180ea310b7f0db8ddefd9696956aaa85fe12cd32ab76ddb3371d3b6e pid=3783 runtime=io.containerd.runc.v2 Feb 12 19:21:17.641112 env[1429]: time="2024-02-12T19:21:17.641058394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v9pdf,Uid:b97b5fde-8b08-4a27-bd3c-b1f3ba1747b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca170533180ea310b7f0db8ddefd9696956aaa85fe12cd32ab76ddb3371d3b6e\"" Feb 12 19:21:17.646338 env[1429]: time="2024-02-12T19:21:17.645700815Z" level=info msg="CreateContainer within sandbox \"ca170533180ea310b7f0db8ddefd9696956aaa85fe12cd32ab76ddb3371d3b6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:21:17.667216 env[1429]: time="2024-02-12T19:21:17.667150800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ptxhd,Uid:1dc20005-51f8-4f9e-94e3-13cfb03e9dc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b2c663d5aadb6569fd4d9cccec03d859dfdeed07d607c08ad02ba2d2c0f1903\"" Feb 12 19:21:17.673846 env[1429]: time="2024-02-12T19:21:17.673793060Z" level=info msg="CreateContainer within sandbox \"1b2c663d5aadb6569fd4d9cccec03d859dfdeed07d607c08ad02ba2d2c0f1903\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:21:17.684696 env[1429]: time="2024-02-12T19:21:17.684624629Z" level=info msg="CreateContainer within sandbox \"ca170533180ea310b7f0db8ddefd9696956aaa85fe12cd32ab76ddb3371d3b6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b88782db36997e46240c0129e7307935b2285bbcc78ca4b17d4e95bb8050a6b6\"" Feb 12 19:21:17.687306 env[1429]: time="2024-02-12T19:21:17.687246808Z" level=info msg="StartContainer for \"b88782db36997e46240c0129e7307935b2285bbcc78ca4b17d4e95bb8050a6b6\"" Feb 12 19:21:17.727116 env[1429]: time="2024-02-12T19:21:17.727054562Z" level=info msg="CreateContainer within sandbox \"1b2c663d5aadb6569fd4d9cccec03d859dfdeed07d607c08ad02ba2d2c0f1903\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3da34a0724869641538c06150be1d76ae1a582684ace55f3f1ca56c81df70e71\"" Feb 12 19:21:17.729830 env[1429]: time="2024-02-12T19:21:17.729782158Z" level=info msg="StartContainer for \"3da34a0724869641538c06150be1d76ae1a582684ace55f3f1ca56c81df70e71\"" Feb 12 19:21:17.769361 env[1429]: time="2024-02-12T19:21:17.769313789Z" level=info msg="StartContainer for \"b88782db36997e46240c0129e7307935b2285bbcc78ca4b17d4e95bb8050a6b6\" returns successfully" Feb 12 19:21:17.845753 env[1429]: time="2024-02-12T19:21:17.845700263Z" level=info msg="StartContainer for \"3da34a0724869641538c06150be1d76ae1a582684ace55f3f1ca56c81df70e71\" returns successfully" Feb 12 19:21:18.278707 kubelet[2600]: I0212 19:21:18.278612 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-ptxhd" podStartSLOduration=25.278576038 pod.CreationTimestamp="2024-02-12 19:20:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:21:18.266930842 +0000 UTC m=+38.415565213" watchObservedRunningTime="2024-02-12 19:21:18.278576038 +0000 UTC m=+38.427210408" Feb 12 19:21:18.279344 kubelet[2600]: I0212 19:21:18.279119 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-v9pdf" podStartSLOduration=25.279094879 pod.CreationTimestamp="2024-02-12 19:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:21:18.277856244 +0000 UTC m=+38.426490614" watchObservedRunningTime="2024-02-12 19:21:18.279094879 +0000 UTC m=+38.427729249" Feb 12 19:23:47.378403 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.12.6:53024.service. Feb 12 19:23:47.797660 sshd[4014]: Accepted publickey for core from 10.200.12.6 port 53024 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:47.799341 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:47.803786 systemd[1]: Started session-8.scope. Feb 12 19:23:47.804287 systemd-logind[1416]: New session 8 of user core. Feb 12 19:23:48.251493 sshd[4014]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:48.254531 systemd-logind[1416]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:23:48.254686 systemd[1]: sshd@5-10.200.20.34:22-10.200.12.6:53024.service: Deactivated successfully. Feb 12 19:23:48.255588 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:23:48.256038 systemd-logind[1416]: Removed session 8. Feb 12 19:23:53.325730 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.12.6:53034.service. Feb 12 19:23:53.775162 sshd[4029]: Accepted publickey for core from 10.200.12.6 port 53034 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:53.776648 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:53.781814 systemd[1]: Started session-9.scope. Feb 12 19:23:53.782906 systemd-logind[1416]: New session 9 of user core. Feb 12 19:23:54.152636 sshd[4029]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:54.155096 systemd[1]: sshd@6-10.200.20.34:22-10.200.12.6:53034.service: Deactivated successfully. Feb 12 19:23:54.156166 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:23:54.156547 systemd-logind[1416]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:23:54.157533 systemd-logind[1416]: Removed session 9. Feb 12 19:23:59.222294 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.12.6:44578.service. Feb 12 19:23:59.647083 sshd[4048]: Accepted publickey for core from 10.200.12.6 port 44578 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:59.648770 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:59.653413 systemd[1]: Started session-10.scope. Feb 12 19:23:59.653590 systemd-logind[1416]: New session 10 of user core. Feb 12 19:24:00.010989 sshd[4048]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:00.014183 systemd[1]: sshd@7-10.200.20.34:22-10.200.12.6:44578.service: Deactivated successfully. Feb 12 19:24:00.015289 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:24:00.015845 systemd-logind[1416]: Session 10 logged out. Waiting for processes to exit. 
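A little earlier, systemd-networkd tracked cilium_host, cilium_net, cilium_vxlan, lxc_health and the per-pod lxc* links gaining carrier and IPv6LL addresses as Cilium wired up the datapath. A sketch of observing the same rtnetlink link events from Go, using the third-party vishvananda/netlink package (an assumed dependency, not something this host is shown running):

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	updates := make(chan netlink.LinkUpdate)
	done := make(chan struct{})
	defer close(done)

	// Subscribe to kernel link notifications, the same source
	// systemd-networkd's "Link UP" / "Gained carrier" lines come from.
	if err := netlink.LinkSubscribe(updates, done); err != nil {
		log.Fatal(err)
	}
	for u := range updates {
		attrs := u.Link.Attrs()
		fmt.Printf("link %s flags=%s\n", attrs.Name, attrs.Flags)
	}
}
```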
Feb 12 19:24:00.016685 systemd-logind[1416]: Removed session 10. Feb 12 19:24:00.679922 update_engine[1419]: I0212 19:24:00.679601 1419 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 12 19:24:00.679922 update_engine[1419]: I0212 19:24:00.679634 1419 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 12 19:24:00.679922 update_engine[1419]: I0212 19:24:00.679750 1419 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 12 19:24:00.680812 update_engine[1419]: I0212 19:24:00.680562 1419 omaha_request_params.cc:62] Current group set to lts Feb 12 19:24:00.680812 update_engine[1419]: I0212 19:24:00.680661 1419 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 12 19:24:00.680812 update_engine[1419]: I0212 19:24:00.680666 1419 update_attempter.cc:643] Scheduling an action processor start. Feb 12 19:24:00.680812 update_engine[1419]: I0212 19:24:00.680681 1419 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 12 19:24:00.680812 update_engine[1419]: I0212 19:24:00.680703 1419 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 12 19:24:00.681013 locksmithd[1518]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 12 19:24:00.719222 update_engine[1419]: I0212 19:24:00.718369 1419 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 12 19:24:00.719222 update_engine[1419]: I0212 19:24:00.718394 1419 omaha_request_action.cc:271] Request: Feb 12 19:24:00.719222 update_engine[1419]: [multi-line Omaha request XML not captured in this transcript] Feb 12 19:24:00.719222 update_engine[1419]: I0212 19:24:00.718400 1419 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:24:00.719801 update_engine[1419]: I0212 19:24:00.719566 1419 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:24:00.719801 update_engine[1419]: I0212 19:24:00.719761 1419 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:24:00.835137 update_engine[1419]: E0212 19:24:00.835006 1419 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:24:00.835137 update_engine[1419]: I0212 19:24:00.835106 1419 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 12 19:24:05.081569 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.12.6:44592.service. Feb 12 19:24:05.507218 sshd[4062]: Accepted publickey for core from 10.200.12.6 port 44592 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:05.508790 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:05.512944 systemd[1]: Started session-11.scope. Feb 12 19:24:05.513141 systemd-logind[1416]: New session 11 of user core. Feb 12 19:24:05.872393 sshd[4062]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:05.877404 systemd-logind[1416]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:24:05.877545 systemd[1]: sshd@8-10.200.20.34:22-10.200.12.6:44592.service: Deactivated successfully. Feb 12 19:24:05.878430 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 19:24:05.878943 systemd-logind[1416]: Removed session 11. Feb 12 19:24:05.939755 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.12.6:44604.service. Feb 12 19:24:06.359319 sshd[4075]: Accepted publickey for core from 10.200.12.6 port 44604 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:06.360697 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:06.364979 systemd-logind[1416]: New session 12 of user core. Feb 12 19:24:06.365501 systemd[1]: Started session-12.scope. Feb 12 19:24:07.450894 sshd[4075]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:07.453238 systemd[1]: sshd@9-10.200.20.34:22-10.200.12.6:44604.service: Deactivated successfully. Feb 12 19:24:07.454327 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:24:07.454699 systemd-logind[1416]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:24:07.455455 systemd-logind[1416]: Removed session 12. Feb 12 19:24:07.522478 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.12.6:39420.service. Feb 12 19:24:07.937605 sshd[4086]: Accepted publickey for core from 10.200.12.6 port 39420 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:07.938929 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:07.943449 systemd[1]: Started session-13.scope. Feb 12 19:24:07.944279 systemd-logind[1416]: New session 13 of user core. Feb 12 19:24:08.295153 sshd[4086]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:08.297897 systemd[1]: sshd@10-10.200.20.34:22-10.200.12.6:39420.service: Deactivated successfully. Feb 12 19:24:08.299162 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:24:08.299637 systemd-logind[1416]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:24:08.300520 systemd-logind[1416]: Removed session 13. Feb 12 19:24:10.681562 update_engine[1419]: I0212 19:24:10.681380 1419 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:24:10.681894 update_engine[1419]: I0212 19:24:10.681592 1419 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:24:10.681894 update_engine[1419]: I0212 19:24:10.681773 1419 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:24:10.720232 update_engine[1419]: E0212 19:24:10.720180 1419 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:24:10.720341 update_engine[1419]: I0212 19:24:10.720306 1419 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 12 19:24:13.363455 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.12.6:39426.service. Feb 12 19:24:13.779395 sshd[4099]: Accepted publickey for core from 10.200.12.6 port 39426 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:13.780944 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:13.785280 systemd[1]: Started session-14.scope. Feb 12 19:24:13.786360 systemd-logind[1416]: New session 14 of user core. Feb 12 19:24:14.135280 sshd[4099]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:14.137717 systemd[1]: sshd@11-10.200.20.34:22-10.200.12.6:39426.service: Deactivated successfully. Feb 12 19:24:14.138768 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:24:14.139335 systemd-logind[1416]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:24:14.140159 systemd-logind[1416]: Removed session 14. 
Feb 12 19:24:19.209084 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.12.6:40630.service. Feb 12 19:24:19.659689 sshd[4116]: Accepted publickey for core from 10.200.12.6 port 40630 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:19.661377 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:19.665321 systemd-logind[1416]: New session 15 of user core. Feb 12 19:24:19.665973 systemd[1]: Started session-15.scope. Feb 12 19:24:20.046397 sshd[4116]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:20.049523 systemd-logind[1416]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:24:20.050292 systemd[1]: sshd@12-10.200.20.34:22-10.200.12.6:40630.service: Deactivated successfully. Feb 12 19:24:20.051104 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:24:20.051873 systemd-logind[1416]: Removed session 15. Feb 12 19:24:20.114556 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.12.6:40638.service. Feb 12 19:24:20.534922 sshd[4129]: Accepted publickey for core from 10.200.12.6 port 40638 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:20.537289 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:20.543467 systemd-logind[1416]: New session 16 of user core. Feb 12 19:24:20.543854 systemd[1]: Started session-16.scope. Feb 12 19:24:20.679214 update_engine[1419]: I0212 19:24:20.679134 1419 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:24:20.679549 update_engine[1419]: I0212 19:24:20.679338 1419 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:24:20.679549 update_engine[1419]: I0212 19:24:20.679515 1419 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:24:20.699509 update_engine[1419]: E0212 19:24:20.699481 1419 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:24:20.699602 update_engine[1419]: I0212 19:24:20.699573 1419 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 12 19:24:20.926293 sshd[4129]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:20.929207 systemd[1]: sshd@13-10.200.20.34:22-10.200.12.6:40638.service: Deactivated successfully. Feb 12 19:24:20.930597 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:24:20.933244 systemd-logind[1416]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:24:20.935948 systemd-logind[1416]: Removed session 16. Feb 12 19:24:20.994419 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.12.6:40652.service. Feb 12 19:24:21.416411 sshd[4139]: Accepted publickey for core from 10.200.12.6 port 40652 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:21.419067 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:21.424092 systemd[1]: Started session-17.scope. Feb 12 19:24:21.424324 systemd-logind[1416]: New session 17 of user core. Feb 12 19:24:22.466820 sshd[4139]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:22.469909 systemd[1]: sshd@14-10.200.20.34:22-10.200.12.6:40652.service: Deactivated successfully. Feb 12 19:24:22.470839 systemd-logind[1416]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:24:22.471179 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:24:22.472293 systemd-logind[1416]: Removed session 17. 
Feb 12 19:24:22.540212 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.12.6:40658.service. Feb 12 19:24:22.990254 sshd[4205]: Accepted publickey for core from 10.200.12.6 port 40658 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:22.991508 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:22.995242 systemd-logind[1416]: New session 18 of user core. Feb 12 19:24:22.995833 systemd[1]: Started session-18.scope. Feb 12 19:24:23.460420 sshd[4205]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:23.463024 systemd[1]: sshd@15-10.200.20.34:22-10.200.12.6:40658.service: Deactivated successfully. Feb 12 19:24:23.464124 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:24:23.464155 systemd-logind[1416]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:24:23.465675 systemd-logind[1416]: Removed session 18. Feb 12 19:24:23.528684 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.12.6:40670.service. Feb 12 19:24:23.950610 sshd[4215]: Accepted publickey for core from 10.200.12.6 port 40670 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:23.952219 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:23.956711 systemd[1]: Started session-19.scope. Feb 12 19:24:23.957235 systemd-logind[1416]: New session 19 of user core. Feb 12 19:24:24.307237 sshd[4215]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:24.310104 systemd[1]: sshd@16-10.200.20.34:22-10.200.12.6:40670.service: Deactivated successfully. Feb 12 19:24:24.310287 systemd-logind[1416]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:24:24.310923 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:24:24.311514 systemd-logind[1416]: Removed session 19. Feb 12 19:24:29.376268 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.12.6:39806.service. Feb 12 19:24:29.798127 sshd[4257]: Accepted publickey for core from 10.200.12.6 port 39806 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:29.799722 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:29.804850 systemd[1]: Started session-20.scope. Feb 12 19:24:29.805340 systemd-logind[1416]: New session 20 of user core. Feb 12 19:24:30.154802 sshd[4257]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:30.157495 systemd-logind[1416]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:24:30.158756 systemd[1]: sshd@17-10.200.20.34:22-10.200.12.6:39806.service: Deactivated successfully. Feb 12 19:24:30.159600 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:24:30.160592 systemd-logind[1416]: Removed session 20. Feb 12 19:24:30.679367 update_engine[1419]: I0212 19:24:30.679323 1419 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:24:30.679693 update_engine[1419]: I0212 19:24:30.679509 1419 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:24:30.679693 update_engine[1419]: I0212 19:24:30.679684 1419 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 12 19:24:30.751696 update_engine[1419]: E0212 19:24:30.751660 1419 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:24:30.751835 update_engine[1419]: I0212 19:24:30.751762 1419 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 12 19:24:30.751835 update_engine[1419]: I0212 19:24:30.751770 1419 omaha_request_action.cc:621] Omaha request response: Feb 12 19:24:30.751881 update_engine[1419]: E0212 19:24:30.751843 1419 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 12 19:24:30.751881 update_engine[1419]: I0212 19:24:30.751856 1419 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 12 19:24:30.751881 update_engine[1419]: I0212 19:24:30.751859 1419 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:24:30.751881 update_engine[1419]: I0212 19:24:30.751863 1419 update_attempter.cc:306] Processing Done. Feb 12 19:24:30.751881 update_engine[1419]: E0212 19:24:30.751873 1419 update_attempter.cc:619] Update failed. Feb 12 19:24:30.751881 update_engine[1419]: I0212 19:24:30.751876 1419 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 12 19:24:30.751881 update_engine[1419]: I0212 19:24:30.751879 1419 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 12 19:24:30.751881 update_engine[1419]: I0212 19:24:30.751883 1419 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 12 19:24:30.752388 update_engine[1419]: I0212 19:24:30.751966 1419 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 12 19:24:30.752388 update_engine[1419]: I0212 19:24:30.751986 1419 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 12 19:24:30.752388 update_engine[1419]: I0212 19:24:30.751989 1419 omaha_request_action.cc:271] Request: Feb 12 19:24:30.752388 update_engine[1419]: [multi-line Omaha request XML not captured in this transcript] Feb 12 19:24:30.752388 update_engine[1419]: I0212 19:24:30.751993 1419 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:24:30.752388 update_engine[1419]: I0212 19:24:30.752118 1419 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:24:30.752388 update_engine[1419]: I0212 19:24:30.752329 1419 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 12 19:24:30.752608 locksmithd[1518]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 12 19:24:30.763261 update_engine[1419]: E0212 19:24:30.763213 1419 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:24:30.763390 update_engine[1419]: I0212 19:24:30.763346 1419 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 12 19:24:30.763390 update_engine[1419]: I0212 19:24:30.763355 1419 omaha_request_action.cc:621] Omaha request response: Feb 12 19:24:30.763390 update_engine[1419]: I0212 19:24:30.763360 1419 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:24:30.763390 update_engine[1419]: I0212 19:24:30.763363 1419 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:24:30.763390 update_engine[1419]: I0212 19:24:30.763365 1419 update_attempter.cc:306] Processing Done. Feb 12 19:24:30.763390 update_engine[1419]: I0212 19:24:30.763370 1419 update_attempter.cc:310] Error event sent. Feb 12 19:24:30.763390 update_engine[1419]: I0212 19:24:30.763378 1419 update_check_scheduler.cc:74] Next update check in 47m47s Feb 12 19:24:30.763898 locksmithd[1518]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 12 19:24:35.224401 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.12.6:39820.service. Feb 12 19:24:35.645667 sshd[4271]: Accepted publickey for core from 10.200.12.6 port 39820 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:35.647315 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:35.651109 systemd-logind[1416]: New session 21 of user core. Feb 12 19:24:35.651587 systemd[1]: Started session-21.scope. Feb 12 19:24:36.011396 sshd[4271]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:36.014095 systemd-logind[1416]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:24:36.014342 systemd[1]: sshd@18-10.200.20.34:22-10.200.12.6:39820.service: Deactivated successfully. Feb 12 19:24:36.015166 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:24:36.015782 systemd-logind[1416]: Removed session 21. Feb 12 19:24:41.079891 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.12.6:32948.service. Feb 12 19:24:41.495466 sshd[4285]: Accepted publickey for core from 10.200.12.6 port 32948 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:41.496992 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:41.501266 systemd[1]: Started session-22.scope. Feb 12 19:24:41.502295 systemd-logind[1416]: New session 22 of user core. Feb 12 19:24:41.848879 sshd[4285]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:41.851365 systemd[1]: sshd@19-10.200.20.34:22-10.200.12.6:32948.service: Deactivated successfully. Feb 12 19:24:41.852446 systemd-logind[1416]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:24:41.852511 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:24:41.853615 systemd-logind[1416]: Removed session 22. Feb 12 19:24:41.921480 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.12.6:32952.service. 
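The whole Omaha exchange above fails in DNS because the request is posted to the literal hostname "disabled"; on Flatcar that is typically the result of SERVER=disabled in /etc/flatcar/update.conf (an assumption here — the config file itself is not shown in this log). After the libcurl retries, error 2000 is mapped to event code 37, the error event is sent, and the next check lands 47m47s out. The resolution failure itself is trivially reproducible:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// "disabled" is not a resolvable name, so every Omaha POST dies in
	// DNS exactly as the journal shows, and no update is ever attempted.
	if _, err := net.LookupHost("disabled"); err != nil {
		fmt.Println("as in the log:", err)
	}
}
```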
Feb 12 19:24:42.371887 sshd[4298]: Accepted publickey for core from 10.200.12.6 port 32952 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:42.373181 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:42.377183 systemd-logind[1416]: New session 23 of user core. Feb 12 19:24:42.377654 systemd[1]: Started session-23.scope. Feb 12 19:24:44.241682 systemd[1]: run-containerd-runc-k8s.io-6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb-runc.SVkiSr.mount: Deactivated successfully. Feb 12 19:24:44.249401 env[1429]: time="2024-02-12T19:24:44.249367405Z" level=info msg="StopContainer for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" with timeout 30 (s)" Feb 12 19:24:44.250110 env[1429]: time="2024-02-12T19:24:44.250077634Z" level=info msg="Stop container \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" with signal terminated" Feb 12 19:24:44.260768 env[1429]: time="2024-02-12T19:24:44.260718268Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:24:44.268882 env[1429]: time="2024-02-12T19:24:44.268846262Z" level=info msg="StopContainer for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" with timeout 1 (s)" Feb 12 19:24:44.269317 env[1429]: time="2024-02-12T19:24:44.269288615Z" level=info msg="Stop container \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" with signal terminated" Feb 12 19:24:44.282690 systemd-networkd[1604]: lxc_health: Link DOWN Feb 12 19:24:44.282695 systemd-networkd[1604]: lxc_health: Lost carrier Feb 12 19:24:44.293879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537-rootfs.mount: Deactivated successfully. Feb 12 19:24:44.328718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb-rootfs.mount: Deactivated successfully. 
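"StopContainer … with timeout 30 (s)" followed by "Stop container … with signal terminated" is the usual two-phase stop: SIGTERM first, SIGKILL only if the grace period lapses. A sketch of that sequence against containerd's task API, with the socket path and container ID as placeholders rather than values from this log:

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// stopWithTimeout mirrors the logged behavior: terminate politely, then
// escalate to SIGKILL if the task outlives the grace period.
func stopWithTimeout(ctx context.Context, task containerd.Task, grace time.Duration) error {
	exitCh, err := task.Wait(ctx) // register before killing to avoid a race
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh:
		return nil // exited within the grace period
	case <-time.After(grace):
		return task.Kill(ctx, syscall.SIGKILL) // escalate
	}
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	container, err := client.LoadContainer(ctx, "some-container-id") // hypothetical ID
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	if err := stopWithTimeout(ctx, task, 30*time.Second); err != nil {
		log.Fatal(err)
	}
}
```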
Feb 12 19:24:44.365892 env[1429]: time="2024-02-12T19:24:44.365840109Z" level=info msg="shim disconnected" id=5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537 Feb 12 19:24:44.365892 env[1429]: time="2024-02-12T19:24:44.365888309Z" level=warning msg="cleaning up after shim disconnected" id=5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537 namespace=k8s.io Feb 12 19:24:44.365892 env[1429]: time="2024-02-12T19:24:44.365897229Z" level=info msg="cleaning up dead shim" Feb 12 19:24:44.366367 env[1429]: time="2024-02-12T19:24:44.366338902Z" level=info msg="shim disconnected" id=6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb Feb 12 19:24:44.366551 env[1429]: time="2024-02-12T19:24:44.366531099Z" level=warning msg="cleaning up after shim disconnected" id=6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb namespace=k8s.io Feb 12 19:24:44.366636 env[1429]: time="2024-02-12T19:24:44.366622297Z" level=info msg="cleaning up dead shim" Feb 12 19:24:44.373595 env[1429]: time="2024-02-12T19:24:44.373547189Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4366 runtime=io.containerd.runc.v2\n" Feb 12 19:24:44.379613 env[1429]: time="2024-02-12T19:24:44.379067103Z" level=info msg="StopContainer for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" returns successfully" Feb 12 19:24:44.380993 env[1429]: time="2024-02-12T19:24:44.380967754Z" level=info msg="StopPodSandbox for \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\"" Feb 12 19:24:44.381175 env[1429]: time="2024-02-12T19:24:44.381155831Z" level=info msg="Container to stop \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:44.384699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736-shm.mount: Deactivated successfully. 
Feb 12 19:24:44.385891 env[1429]: time="2024-02-12T19:24:44.385862037Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4367 runtime=io.containerd.runc.v2\n" Feb 12 19:24:44.390914 env[1429]: time="2024-02-12T19:24:44.390873599Z" level=info msg="StopContainer for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" returns successfully" Feb 12 19:24:44.391483 env[1429]: time="2024-02-12T19:24:44.391458510Z" level=info msg="StopPodSandbox for \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\"" Feb 12 19:24:44.391652 env[1429]: time="2024-02-12T19:24:44.391629867Z" level=info msg="Container to stop \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:44.391720 env[1429]: time="2024-02-12T19:24:44.391705106Z" level=info msg="Container to stop \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:44.391782 env[1429]: time="2024-02-12T19:24:44.391766145Z" level=info msg="Container to stop \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:44.391928 env[1429]: time="2024-02-12T19:24:44.391908023Z" level=info msg="Container to stop \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:44.392027 env[1429]: time="2024-02-12T19:24:44.392011501Z" level=info msg="Container to stop \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:44.430514 env[1429]: time="2024-02-12T19:24:44.429972150Z" level=info msg="shim disconnected" id=fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736 Feb 12 19:24:44.431972 env[1429]: time="2024-02-12T19:24:44.431941759Z" level=warning msg="cleaning up after shim disconnected" id=fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736 namespace=k8s.io Feb 12 19:24:44.432098 env[1429]: time="2024-02-12T19:24:44.432083597Z" level=info msg="cleaning up dead shim" Feb 12 19:24:44.439759 env[1429]: time="2024-02-12T19:24:44.439705718Z" level=info msg="shim disconnected" id=053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea Feb 12 19:24:44.439759 env[1429]: time="2024-02-12T19:24:44.439753237Z" level=warning msg="cleaning up after shim disconnected" id=053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea namespace=k8s.io Feb 12 19:24:44.439759 env[1429]: time="2024-02-12T19:24:44.439763477Z" level=info msg="cleaning up dead shim" Feb 12 19:24:44.441021 env[1429]: time="2024-02-12T19:24:44.440989418Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4433 runtime=io.containerd.runc.v2\n" Feb 12 19:24:44.441495 env[1429]: time="2024-02-12T19:24:44.441467810Z" level=info msg="TearDown network for sandbox \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\" successfully" Feb 12 19:24:44.441593 env[1429]: time="2024-02-12T19:24:44.441577249Z" level=info msg="StopPodSandbox for \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\" returns successfully" Feb 12 19:24:44.449735 env[1429]: 
time="2024-02-12T19:24:44.449705402Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4447 runtime=io.containerd.runc.v2\n" Feb 12 19:24:44.450144 env[1429]: time="2024-02-12T19:24:44.450119835Z" level=info msg="TearDown network for sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" successfully" Feb 12 19:24:44.450280 env[1429]: time="2024-02-12T19:24:44.450262593Z" level=info msg="StopPodSandbox for \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" returns successfully" Feb 12 19:24:44.557147 kubelet[2600]: I0212 19:24:44.555805 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-hubble-tls\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557147 kubelet[2600]: I0212 19:24:44.555841 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-etc-cni-netd\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557147 kubelet[2600]: I0212 19:24:44.555872 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7zts\" (UniqueName: \"kubernetes.io/projected/1d83b632-f389-4f26-ac21-b096cfb6251e-kube-api-access-p7zts\") pod \"1d83b632-f389-4f26-ac21-b096cfb6251e\" (UID: \"1d83b632-f389-4f26-ac21-b096cfb6251e\") " Feb 12 19:24:44.557147 kubelet[2600]: I0212 19:24:44.555909 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-config-path\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557147 kubelet[2600]: I0212 19:24:44.555926 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-host-proc-sys-net\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557147 kubelet[2600]: I0212 19:24:44.555956 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-xtables-lock\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557594 kubelet[2600]: I0212 19:24:44.555973 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-run\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557594 kubelet[2600]: I0212 19:24:44.555994 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-bpf-maps\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557594 kubelet[2600]: I0212 19:24:44.556024 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-clustermesh-secrets\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557594 kubelet[2600]: I0212 19:24:44.556045 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qphl\" (UniqueName: \"kubernetes.io/projected/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-kube-api-access-8qphl\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557594 kubelet[2600]: I0212 19:24:44.556062 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-cgroup\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557594 kubelet[2600]: I0212 19:24:44.556078 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-lib-modules\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557734 kubelet[2600]: I0212 19:24:44.556105 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-host-proc-sys-kernel\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557734 kubelet[2600]: I0212 19:24:44.556124 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-hostproc\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557734 kubelet[2600]: I0212 19:24:44.556144 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d83b632-f389-4f26-ac21-b096cfb6251e-cilium-config-path\") pod \"1d83b632-f389-4f26-ac21-b096cfb6251e\" (UID: \"1d83b632-f389-4f26-ac21-b096cfb6251e\") " Feb 12 19:24:44.557734 kubelet[2600]: I0212 19:24:44.556160 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cni-path\") pod \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\" (UID: \"d0918d1e-df63-47d5-9fe2-7a7a7dab1d43\") " Feb 12 19:24:44.557734 kubelet[2600]: I0212 19:24:44.556226 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cni-path" (OuterVolumeSpecName: "cni-path") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.557734 kubelet[2600]: I0212 19:24:44.556271 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.557865 kubelet[2600]: W0212 19:24:44.556910 2600 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:24:44.560607 kubelet[2600]: I0212 19:24:44.558287 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.560607 kubelet[2600]: I0212 19:24:44.558324 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.560607 kubelet[2600]: I0212 19:24:44.558340 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.560607 kubelet[2600]: I0212 19:24:44.558356 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.560607 kubelet[2600]: I0212 19:24:44.558373 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.560801 kubelet[2600]: I0212 19:24:44.558388 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.560801 kubelet[2600]: I0212 19:24:44.558404 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.560801 kubelet[2600]: I0212 19:24:44.558681 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-hostproc" (OuterVolumeSpecName: "hostproc") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:44.560801 kubelet[2600]: W0212 19:24:44.558809 2600 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1d83b632-f389-4f26-ac21-b096cfb6251e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:24:44.560801 kubelet[2600]: I0212 19:24:44.558974 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:24:44.560913 kubelet[2600]: I0212 19:24:44.560554 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d83b632-f389-4f26-ac21-b096cfb6251e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d83b632-f389-4f26-ac21-b096cfb6251e" (UID: "1d83b632-f389-4f26-ac21-b096cfb6251e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:24:44.561440 kubelet[2600]: I0212 19:24:44.561402 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d83b632-f389-4f26-ac21-b096cfb6251e-kube-api-access-p7zts" (OuterVolumeSpecName: "kube-api-access-p7zts") pod "1d83b632-f389-4f26-ac21-b096cfb6251e" (UID: "1d83b632-f389-4f26-ac21-b096cfb6251e"). InnerVolumeSpecName "kube-api-access-p7zts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:44.563037 kubelet[2600]: I0212 19:24:44.562997 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-kube-api-access-8qphl" (OuterVolumeSpecName: "kube-api-access-8qphl") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "kube-api-access-8qphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:44.563662 kubelet[2600]: I0212 19:24:44.563631 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:44.565434 kubelet[2600]: I0212 19:24:44.565405 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" (UID: "d0918d1e-df63-47d5-9fe2-7a7a7dab1d43"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:44.600893 kubelet[2600]: I0212 19:24:44.600864 2600 scope.go:115] "RemoveContainer" containerID="6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb" Feb 12 19:24:44.604231 env[1429]: time="2024-02-12T19:24:44.604159754Z" level=info msg="RemoveContainer for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\"" Feb 12 19:24:44.623808 env[1429]: time="2024-02-12T19:24:44.623752488Z" level=info msg="RemoveContainer for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" returns successfully" Feb 12 19:24:44.624212 kubelet[2600]: I0212 19:24:44.624175 2600 scope.go:115] "RemoveContainer" containerID="1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f" Feb 12 19:24:44.628854 env[1429]: time="2024-02-12T19:24:44.628813770Z" level=info msg="RemoveContainer for \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\"" Feb 12 19:24:44.637848 env[1429]: time="2024-02-12T19:24:44.637803709Z" level=info msg="RemoveContainer for \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\" returns successfully" Feb 12 19:24:44.638213 kubelet[2600]: I0212 19:24:44.638179 2600 scope.go:115] "RemoveContainer" containerID="1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3" Feb 12 19:24:44.641213 env[1429]: time="2024-02-12T19:24:44.640642505Z" level=info msg="RemoveContainer for \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\"" Feb 12 19:24:44.652796 env[1429]: time="2024-02-12T19:24:44.652691037Z" level=info msg="RemoveContainer for \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\" returns successfully" Feb 12 19:24:44.653099 kubelet[2600]: I0212 19:24:44.653075 2600 scope.go:115] "RemoveContainer" containerID="8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f" Feb 12 19:24:44.658034 kubelet[2600]: I0212 19:24:44.658009 2600 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-hubble-tls\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658034 kubelet[2600]: I0212 19:24:44.658041 2600 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-etc-cni-netd\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658168 kubelet[2600]: I0212 19:24:44.658054 2600 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-p7zts\" (UniqueName: \"kubernetes.io/projected/1d83b632-f389-4f26-ac21-b096cfb6251e-kube-api-access-p7zts\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658168 kubelet[2600]: I0212 19:24:44.658065 2600 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-host-proc-sys-net\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658168 kubelet[2600]: I0212 19:24:44.658076 2600 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-xtables-lock\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658168 kubelet[2600]: I0212 19:24:44.658086 2600 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-run\") on node 
\"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658168 kubelet[2600]: I0212 19:24:44.658097 2600 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-config-path\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658351 kubelet[2600]: I0212 19:24:44.658107 2600 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-bpf-maps\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658387 kubelet[2600]: I0212 19:24:44.658356 2600 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-clustermesh-secrets\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658387 kubelet[2600]: I0212 19:24:44.658380 2600 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8qphl\" (UniqueName: \"kubernetes.io/projected/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-kube-api-access-8qphl\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658515 kubelet[2600]: I0212 19:24:44.658391 2600 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cilium-cgroup\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658515 kubelet[2600]: I0212 19:24:44.658401 2600 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-lib-modules\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658515 kubelet[2600]: I0212 19:24:44.658411 2600 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658515 kubelet[2600]: I0212 19:24:44.658420 2600 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-hostproc\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658515 kubelet[2600]: I0212 19:24:44.658431 2600 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d83b632-f389-4f26-ac21-b096cfb6251e-cilium-config-path\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658515 kubelet[2600]: I0212 19:24:44.658442 2600 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43-cni-path\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:44.658879 env[1429]: time="2024-02-12T19:24:44.658840461Z" level=info msg="RemoveContainer for \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\"" Feb 12 19:24:44.666038 env[1429]: time="2024-02-12T19:24:44.665999270Z" level=info msg="RemoveContainer for \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\" returns successfully" Feb 12 19:24:44.666219 kubelet[2600]: I0212 19:24:44.666197 2600 scope.go:115] "RemoveContainer" containerID="504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e" Feb 12 19:24:44.667094 env[1429]: time="2024-02-12T19:24:44.667071093Z" level=info 
msg="RemoveContainer for \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\"" Feb 12 19:24:44.677068 env[1429]: time="2024-02-12T19:24:44.677037178Z" level=info msg="RemoveContainer for \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\" returns successfully" Feb 12 19:24:44.677409 kubelet[2600]: I0212 19:24:44.677374 2600 scope.go:115] "RemoveContainer" containerID="6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb" Feb 12 19:24:44.677798 env[1429]: time="2024-02-12T19:24:44.677731887Z" level=error msg="ContainerStatus for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\": not found" Feb 12 19:24:44.678027 kubelet[2600]: E0212 19:24:44.677998 2600 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\": not found" containerID="6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb" Feb 12 19:24:44.678084 kubelet[2600]: I0212 19:24:44.678043 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb} err="failed to get container status \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\": not found" Feb 12 19:24:44.678084 kubelet[2600]: I0212 19:24:44.678054 2600 scope.go:115] "RemoveContainer" containerID="1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f" Feb 12 19:24:44.678298 env[1429]: time="2024-02-12T19:24:44.678258079Z" level=error msg="ContainerStatus for \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\": not found" Feb 12 19:24:44.678480 kubelet[2600]: E0212 19:24:44.678467 2600 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\": not found" containerID="1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f" Feb 12 19:24:44.678567 kubelet[2600]: I0212 19:24:44.678557 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f} err="failed to get container status \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e114594ab181b4e55716198ac546f58b10cef3d9a0df6c57095385bd989b96f\": not found" Feb 12 19:24:44.678628 kubelet[2600]: I0212 19:24:44.678619 2600 scope.go:115] "RemoveContainer" containerID="1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3" Feb 12 19:24:44.678849 env[1429]: time="2024-02-12T19:24:44.678810030Z" level=error msg="ContainerStatus for \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\": not found" Feb 12 19:24:44.679017 kubelet[2600]: E0212 19:24:44.679000 2600 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\": not found" containerID="1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3" Feb 12 19:24:44.679081 kubelet[2600]: I0212 19:24:44.679050 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3} err="failed to get container status \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ddda07a74d904aa68b502fc1521f084bd9aa97c4e8a336e53e4d08fe2fc89e3\": not found" Feb 12 19:24:44.679081 kubelet[2600]: I0212 19:24:44.679060 2600 scope.go:115] "RemoveContainer" containerID="8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f" Feb 12 19:24:44.679302 env[1429]: time="2024-02-12T19:24:44.679257943Z" level=error msg="ContainerStatus for \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\": not found" Feb 12 19:24:44.679454 kubelet[2600]: E0212 19:24:44.679439 2600 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\": not found" containerID="8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f" Feb 12 19:24:44.679510 kubelet[2600]: I0212 19:24:44.679461 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f} err="failed to get container status \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fe5463976c17d85f2d2a0f4bba69aa1ff9e75ac4c75ddce4d59bf9ed2ec5e4f\": not found" Feb 12 19:24:44.679510 kubelet[2600]: I0212 19:24:44.679479 2600 scope.go:115] "RemoveContainer" containerID="504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e" Feb 12 19:24:44.679690 env[1429]: time="2024-02-12T19:24:44.679652377Z" level=error msg="ContainerStatus for \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\": not found" Feb 12 19:24:44.679866 kubelet[2600]: E0212 19:24:44.679854 2600 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\": not found" containerID="504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e" Feb 12 19:24:44.679951 kubelet[2600]: I0212 19:24:44.679942 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e} err="failed to get container status 
\"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"504d6c95953bb42e344044c12154452a179ef492e9cbdbee8735dd301386cd3e\": not found" Feb 12 19:24:44.680008 kubelet[2600]: I0212 19:24:44.679999 2600 scope.go:115] "RemoveContainer" containerID="5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537" Feb 12 19:24:44.680933 env[1429]: time="2024-02-12T19:24:44.680911517Z" level=info msg="RemoveContainer for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\"" Feb 12 19:24:44.688440 env[1429]: time="2024-02-12T19:24:44.688412720Z" level=info msg="RemoveContainer for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" returns successfully" Feb 12 19:24:44.688804 kubelet[2600]: I0212 19:24:44.688781 2600 scope.go:115] "RemoveContainer" containerID="5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537" Feb 12 19:24:44.689037 env[1429]: time="2024-02-12T19:24:44.688984831Z" level=error msg="ContainerStatus for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\": not found" Feb 12 19:24:44.689173 kubelet[2600]: E0212 19:24:44.689132 2600 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\": not found" containerID="5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537" Feb 12 19:24:44.689289 kubelet[2600]: I0212 19:24:44.689274 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537} err="failed to get container status \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\": not found" Feb 12 19:24:45.202830 kubelet[2600]: E0212 19:24:45.202799 2600 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:24:45.234139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea-rootfs.mount: Deactivated successfully. Feb 12 19:24:45.234304 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea-shm.mount: Deactivated successfully. Feb 12 19:24:45.234391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736-rootfs.mount: Deactivated successfully. Feb 12 19:24:45.234477 systemd[1]: var-lib-kubelet-pods-d0918d1e\x2ddf63\x2d47d5\x2d9fe2\x2d7a7a7dab1d43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8qphl.mount: Deactivated successfully. Feb 12 19:24:45.234560 systemd[1]: var-lib-kubelet-pods-1d83b632\x2df389\x2d4f26\x2dac21\x2db096cfb6251e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7zts.mount: Deactivated successfully. 
Feb 12 19:24:45.234633 systemd[1]: var-lib-kubelet-pods-d0918d1e\x2ddf63\x2d47d5\x2d9fe2\x2d7a7a7dab1d43-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:24:45.234720 systemd[1]: var-lib-kubelet-pods-d0918d1e\x2ddf63\x2d47d5\x2d9fe2\x2d7a7a7dab1d43-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:45.355867 kubelet[2600]: I0212 19:24:45.355838 2600 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-7e4be4023b" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:24:45.355767524 +0000 UTC m=+245.504401894 LastTransitionTime:2024-02-12 19:24:45.355767524 +0000 UTC m=+245.504401894 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:24:46.142564 env[1429]: time="2024-02-12T19:24:46.142522622Z" level=info msg="StopContainer for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" with timeout 1 (s)" Feb 12 19:24:46.143019 env[1429]: time="2024-02-12T19:24:46.142964936Z" level=error msg="StopContainer for \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\": not found" Feb 12 19:24:46.143136 env[1429]: time="2024-02-12T19:24:46.142872537Z" level=info msg="StopContainer for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" with timeout 1 (s)" Feb 12 19:24:46.143271 env[1429]: time="2024-02-12T19:24:46.143228052Z" level=error msg="StopContainer for \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\": not found" Feb 12 19:24:46.143442 kubelet[2600]: E0212 19:24:46.143426 2600 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537\": not found" containerID="5d5b9df1863a564439352383146cbf76d5b824836e57f9fcc7e7fede8eba4537" Feb 12 19:24:46.144009 kubelet[2600]: E0212 19:24:46.143790 2600 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb\": not found" containerID="6f7f803ef70af8ea3e4e8c233baa4077e2551d048de42b36c951f8a142f95bfb" Feb 12 19:24:46.144250 env[1429]: time="2024-02-12T19:24:46.144226558Z" level=info msg="StopPodSandbox for \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\"" Feb 12 19:24:46.144408 env[1429]: time="2024-02-12T19:24:46.144371756Z" level=info msg="TearDown network for sandbox \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\" successfully" Feb 12 19:24:46.144473 env[1429]: time="2024-02-12T19:24:46.144457914Z" level=info msg="StopPodSandbox for \"fb91fdf617390e8cd93e3d159c3484d7a2ad99d1bb14bfec5442318de3684736\" returns successfully" Feb 12 19:24:46.144753 env[1429]: time="2024-02-12T19:24:46.144733830Z" level=info msg="StopPodSandbox for \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\"" Feb 12 19:24:46.144900 env[1429]: time="2024-02-12T19:24:46.144864749Z" 
level=info msg="TearDown network for sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" successfully" Feb 12 19:24:46.144967 env[1429]: time="2024-02-12T19:24:46.144950787Z" level=info msg="StopPodSandbox for \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" returns successfully" Feb 12 19:24:46.145277 kubelet[2600]: I0212 19:24:46.145253 2600 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1d83b632-f389-4f26-ac21-b096cfb6251e path="/var/lib/kubelet/pods/1d83b632-f389-4f26-ac21-b096cfb6251e/volumes" Feb 12 19:24:46.145646 kubelet[2600]: I0212 19:24:46.145623 2600 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d0918d1e-df63-47d5-9fe2-7a7a7dab1d43 path="/var/lib/kubelet/pods/d0918d1e-df63-47d5-9fe2-7a7a7dab1d43/volumes" Feb 12 19:24:46.249395 sshd[4298]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:46.252402 systemd[1]: sshd@20-10.200.20.34:22-10.200.12.6:32952.service: Deactivated successfully. Feb 12 19:24:46.253159 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:24:46.254033 systemd-logind[1416]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:24:46.254943 systemd-logind[1416]: Removed session 23. Feb 12 19:24:46.318082 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.12.6:32958.service. Feb 12 19:24:46.742564 sshd[4465]: Accepted publickey for core from 10.200.12.6 port 32958 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:46.744217 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:46.747983 systemd-logind[1416]: New session 24 of user core. Feb 12 19:24:46.748428 systemd[1]: Started session-24.scope. Feb 12 19:24:47.615040 kubelet[2600]: I0212 19:24:47.615000 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:47.615485 kubelet[2600]: E0212 19:24:47.615472 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" containerName="mount-cgroup" Feb 12 19:24:47.615574 kubelet[2600]: E0212 19:24:47.615565 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" containerName="clean-cilium-state" Feb 12 19:24:47.615641 kubelet[2600]: E0212 19:24:47.615621 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" containerName="cilium-agent" Feb 12 19:24:47.615695 kubelet[2600]: E0212 19:24:47.615687 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d83b632-f389-4f26-ac21-b096cfb6251e" containerName="cilium-operator" Feb 12 19:24:47.615769 kubelet[2600]: E0212 19:24:47.615759 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" containerName="apply-sysctl-overwrites" Feb 12 19:24:47.615828 kubelet[2600]: E0212 19:24:47.615814 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" containerName="mount-bpf-fs" Feb 12 19:24:47.615912 kubelet[2600]: I0212 19:24:47.615903 2600 memory_manager.go:346] "RemoveStaleState removing state" podUID="d0918d1e-df63-47d5-9fe2-7a7a7dab1d43" containerName="cilium-agent" Feb 12 19:24:47.615975 kubelet[2600]: I0212 19:24:47.615967 2600 memory_manager.go:346] "RemoveStaleState removing state" podUID="1d83b632-f389-4f26-ac21-b096cfb6251e" containerName="cilium-operator" Feb 12 19:24:47.659670 sshd[4465]: pam_unix(sshd:session): session closed for 
user core Feb 12 19:24:47.662420 systemd-logind[1416]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:24:47.662557 systemd[1]: sshd@21-10.200.20.34:22-10.200.12.6:32958.service: Deactivated successfully. Feb 12 19:24:47.663356 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:24:47.663868 systemd-logind[1416]: Removed session 24. Feb 12 19:24:47.672234 kubelet[2600]: I0212 19:24:47.672180 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-xtables-lock\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.672439 kubelet[2600]: I0212 19:24:47.672426 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-host-proc-sys-kernel\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.672525 kubelet[2600]: I0212 19:24:47.672516 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-hubble-tls\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.672609 kubelet[2600]: I0212 19:24:47.672600 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-etc-cni-netd\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.672695 kubelet[2600]: I0212 19:24:47.672687 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-hostproc\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.672780 kubelet[2600]: I0212 19:24:47.672771 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-ipsec-secrets\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.672870 kubelet[2600]: I0212 19:24:47.672862 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9qls\" (UniqueName: \"kubernetes.io/projected/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-kube-api-access-f9qls\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.672958 kubelet[2600]: I0212 19:24:47.672949 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-run\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.673041 kubelet[2600]: I0212 19:24:47.673032 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-lib-modules\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.673234 kubelet[2600]: I0212 19:24:47.673221 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-clustermesh-secrets\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.673343 kubelet[2600]: I0212 19:24:47.673330 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-cgroup\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.673464 kubelet[2600]: I0212 19:24:47.673454 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cni-path\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.673569 kubelet[2600]: I0212 19:24:47.673560 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-host-proc-sys-net\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.673674 kubelet[2600]: I0212 19:24:47.673665 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-bpf-maps\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.673780 kubelet[2600]: I0212 19:24:47.673770 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-config-path\") pod \"cilium-xjfjw\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " pod="kube-system/cilium-xjfjw" Feb 12 19:24:47.727911 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.12.6:36496.service. Feb 12 19:24:47.935302 env[1429]: time="2024-02-12T19:24:47.934503998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xjfjw,Uid:6def14aa-e658-4f0b-8dd5-e554c3fb4b58,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:47.965127 env[1429]: time="2024-02-12T19:24:47.965039585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:47.965127 env[1429]: time="2024-02-12T19:24:47.965085505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:47.965374 env[1429]: time="2024-02-12T19:24:47.965327021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:47.966296 env[1429]: time="2024-02-12T19:24:47.965596818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b pid=4490 runtime=io.containerd.runc.v2 Feb 12 19:24:48.005269 env[1429]: time="2024-02-12T19:24:48.005219964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xjfjw,Uid:6def14aa-e658-4f0b-8dd5-e554c3fb4b58,Namespace:kube-system,Attempt:0,} returns sandbox id \"b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b\"" Feb 12 19:24:48.011123 env[1429]: time="2024-02-12T19:24:48.011078449Z" level=info msg="CreateContainer within sandbox \"b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:24:48.042335 env[1429]: time="2024-02-12T19:24:48.042276808Z" level=info msg="CreateContainer within sandbox \"b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88\"" Feb 12 19:24:48.044465 env[1429]: time="2024-02-12T19:24:48.043165356Z" level=info msg="StartContainer for \"e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88\"" Feb 12 19:24:48.089011 env[1429]: time="2024-02-12T19:24:48.088719410Z" level=info msg="StartContainer for \"e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88\" returns successfully" Feb 12 19:24:48.142484 env[1429]: time="2024-02-12T19:24:48.142082204Z" level=info msg="shim disconnected" id=e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88 Feb 12 19:24:48.142484 env[1429]: time="2024-02-12T19:24:48.142130883Z" level=warning msg="cleaning up after shim disconnected" id=e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88 namespace=k8s.io Feb 12 19:24:48.142484 env[1429]: time="2024-02-12T19:24:48.142140563Z" level=info msg="cleaning up dead shim" Feb 12 19:24:48.147722 sshd[4476]: Accepted publickey for core from 10.200.12.6 port 36496 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:48.148250 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:48.152919 systemd[1]: Started session-25.scope. Feb 12 19:24:48.153128 systemd-logind[1416]: New session 25 of user core. Feb 12 19:24:48.162229 env[1429]: time="2024-02-12T19:24:48.162171025Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4574 runtime=io.containerd.runc.v2\n" Feb 12 19:24:48.524452 sshd[4476]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:48.526941 systemd[1]: sshd@22-10.200.20.34:22-10.200.12.6:36496.service: Deactivated successfully. Feb 12 19:24:48.528243 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 19:24:48.528585 systemd-logind[1416]: Session 25 logged out. Waiting for processes to exit. Feb 12 19:24:48.529543 systemd-logind[1416]: Removed session 25. Feb 12 19:24:48.604953 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.12.6:36500.service. 
Feb 12 19:24:48.638324 env[1429]: time="2024-02-12T19:24:48.638285821Z" level=info msg="StopPodSandbox for \"b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b\"" Feb 12 19:24:48.638621 env[1429]: time="2024-02-12T19:24:48.638598097Z" level=info msg="Container to stop \"e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:48.692975 env[1429]: time="2024-02-12T19:24:48.692929359Z" level=info msg="shim disconnected" id=b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b Feb 12 19:24:48.693236 env[1429]: time="2024-02-12T19:24:48.693217475Z" level=warning msg="cleaning up after shim disconnected" id=b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b namespace=k8s.io Feb 12 19:24:48.693316 env[1429]: time="2024-02-12T19:24:48.693303074Z" level=info msg="cleaning up dead shim" Feb 12 19:24:48.700591 env[1429]: time="2024-02-12T19:24:48.700550821Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4621 runtime=io.containerd.runc.v2\n" Feb 12 19:24:48.701022 env[1429]: time="2024-02-12T19:24:48.700998015Z" level=info msg="TearDown network for sandbox \"b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b\" successfully" Feb 12 19:24:48.701109 env[1429]: time="2024-02-12T19:24:48.701093494Z" level=info msg="StopPodSandbox for \"b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b\" returns successfully" Feb 12 19:24:48.779410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b78101e7f409f31a5513c156f820acd5a22fcd74e069147b6de504622f2bcf8b-shm.mount: Deactivated successfully. Feb 12 19:24:48.880890 kubelet[2600]: I0212 19:24:48.880233 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-clustermesh-secrets\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.880890 kubelet[2600]: I0212 19:24:48.880279 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-etc-cni-netd\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.880890 kubelet[2600]: I0212 19:24:48.880297 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-hostproc\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.880890 kubelet[2600]: I0212 19:24:48.880313 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cni-path\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.880890 kubelet[2600]: I0212 19:24:48.880330 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-bpf-maps\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.880890 kubelet[2600]: I0212 19:24:48.880353 2600 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-ipsec-secrets\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881479 kubelet[2600]: I0212 19:24:48.880379 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-hubble-tls\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881479 kubelet[2600]: I0212 19:24:48.880396 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-cgroup\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881479 kubelet[2600]: I0212 19:24:48.880415 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-xtables-lock\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881479 kubelet[2600]: I0212 19:24:48.880432 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-host-proc-sys-kernel\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881479 kubelet[2600]: I0212 19:24:48.880449 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-lib-modules\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881479 kubelet[2600]: I0212 19:24:48.880468 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-run\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881658 kubelet[2600]: I0212 19:24:48.880487 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-host-proc-sys-net\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881658 kubelet[2600]: I0212 19:24:48.880506 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-config-path\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881658 kubelet[2600]: I0212 19:24:48.880526 2600 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9qls\" (UniqueName: \"kubernetes.io/projected/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-kube-api-access-f9qls\") pod \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\" (UID: \"6def14aa-e658-4f0b-8dd5-e554c3fb4b58\") " Feb 12 19:24:48.881658 kubelet[2600]: I0212 19:24:48.880898 2600 
operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881658 kubelet[2600]: I0212 19:24:48.880944 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881767 kubelet[2600]: I0212 19:24:48.880962 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-hostproc" (OuterVolumeSpecName: "hostproc") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881767 kubelet[2600]: I0212 19:24:48.880980 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cni-path" (OuterVolumeSpecName: "cni-path") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881767 kubelet[2600]: I0212 19:24:48.880994 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881767 kubelet[2600]: I0212 19:24:48.881272 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881767 kubelet[2600]: I0212 19:24:48.881297 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881880 kubelet[2600]: I0212 19:24:48.881312 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881880 kubelet[2600]: I0212 19:24:48.881338 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881880 kubelet[2600]: I0212 19:24:48.881356 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:48.881880 kubelet[2600]: W0212 19:24:48.881466 2600 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6def14aa-e658-4f0b-8dd5-e554c3fb4b58/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:24:48.883216 kubelet[2600]: I0212 19:24:48.883147 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:24:48.885871 kubelet[2600]: I0212 19:24:48.885847 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:48.887892 kubelet[2600]: I0212 19:24:48.887849 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:48.888581 kubelet[2600]: I0212 19:24:48.888559 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:48.889049 systemd[1]: var-lib-kubelet-pods-6def14aa\x2de658\x2d4f0b\x2d8dd5\x2de554c3fb4b58-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9qls.mount: Deactivated successfully. Feb 12 19:24:48.889206 systemd[1]: var-lib-kubelet-pods-6def14aa\x2de658\x2d4f0b\x2d8dd5\x2de554c3fb4b58-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 19:24:48.889297 systemd[1]: var-lib-kubelet-pods-6def14aa\x2de658\x2d4f0b\x2d8dd5\x2de554c3fb4b58-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:24:48.889377 systemd[1]: var-lib-kubelet-pods-6def14aa\x2de658\x2d4f0b\x2d8dd5\x2de554c3fb4b58-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:48.892393 kubelet[2600]: I0212 19:24:48.892337 2600 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-kube-api-access-f9qls" (OuterVolumeSpecName: "kube-api-access-f9qls") pod "6def14aa-e658-4f0b-8dd5-e554c3fb4b58" (UID: "6def14aa-e658-4f0b-8dd5-e554c3fb4b58"). InnerVolumeSpecName "kube-api-access-f9qls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:48.980789 kubelet[2600]: I0212 19:24:48.980755 2600 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-xtables-lock\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980789 kubelet[2600]: I0212 19:24:48.980791 2600 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980980 kubelet[2600]: I0212 19:24:48.980802 2600 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-lib-modules\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980980 kubelet[2600]: I0212 19:24:48.980812 2600 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-run\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980980 kubelet[2600]: I0212 19:24:48.980822 2600 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-host-proc-sys-net\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980980 kubelet[2600]: I0212 19:24:48.980832 2600 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-config-path\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980980 kubelet[2600]: I0212 19:24:48.980842 2600 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-f9qls\" (UniqueName: \"kubernetes.io/projected/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-kube-api-access-f9qls\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980980 kubelet[2600]: I0212 19:24:48.980852 2600 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-hostproc\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980980 kubelet[2600]: I0212 19:24:48.980861 2600 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-clustermesh-secrets\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.980980 kubelet[2600]: I0212 19:24:48.980873 2600 reconciler_common.go:295] "Volume detached for volume
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-etc-cni-netd\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.981169 kubelet[2600]: I0212 19:24:48.980883 2600 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cni-path\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.981169 kubelet[2600]: I0212 19:24:48.980893 2600 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-bpf-maps\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.981169 kubelet[2600]: I0212 19:24:48.980902 2600 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.981169 kubelet[2600]: I0212 19:24:48.980912 2600 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-hubble-tls\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:48.981169 kubelet[2600]: I0212 19:24:48.980922 2600 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6def14aa-e658-4f0b-8dd5-e554c3fb4b58-cilium-cgroup\") on node \"ci-3510.3.2-a-7e4be4023b\" DevicePath \"\"" Feb 12 19:24:49.055376 sshd[4599]: Accepted publickey for core from 10.200.12.6 port 36500 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:49.056295 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:49.060048 systemd-logind[1416]: New session 26 of user core. Feb 12 19:24:49.060542 systemd[1]: Started session-26.scope. 
Feb 12 19:24:49.640019 kubelet[2600]: I0212 19:24:49.639997 2600 scope.go:115] "RemoveContainer" containerID="e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88" Feb 12 19:24:49.644659 env[1429]: time="2024-02-12T19:24:49.644617698Z" level=info msg="RemoveContainer for \"e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88\"" Feb 12 19:24:49.657552 env[1429]: time="2024-02-12T19:24:49.657495780Z" level=info msg="RemoveContainer for \"e58fc0c41ebb5701365db52ab28d0c887b6f20b98fc4162954036976392fbe88\" returns successfully" Feb 12 19:24:49.672372 kubelet[2600]: I0212 19:24:49.672325 2600 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:49.672518 kubelet[2600]: E0212 19:24:49.672386 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6def14aa-e658-4f0b-8dd5-e554c3fb4b58" containerName="mount-cgroup" Feb 12 19:24:49.672518 kubelet[2600]: I0212 19:24:49.672410 2600 memory_manager.go:346] "RemoveStaleState removing state" podUID="6def14aa-e658-4f0b-8dd5-e554c3fb4b58" containerName="mount-cgroup" Feb 12 19:24:49.785236 kubelet[2600]: I0212 19:24:49.785200 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqb8t\" (UniqueName: \"kubernetes.io/projected/b0b70784-4e93-4c9e-8356-f335f9be52d7-kube-api-access-wqb8t\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785368 kubelet[2600]: I0212 19:24:49.785247 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-cni-path\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785368 kubelet[2600]: I0212 19:24:49.785304 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-lib-modules\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785368 kubelet[2600]: I0212 19:24:49.785328 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0b70784-4e93-4c9e-8356-f335f9be52d7-cilium-ipsec-secrets\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785368 kubelet[2600]: I0212 19:24:49.785359 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-host-proc-sys-kernel\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785496 kubelet[2600]: I0212 19:24:49.785382 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-bpf-maps\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785496 kubelet[2600]: I0212 19:24:49.785410 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b0b70784-4e93-4c9e-8356-f335f9be52d7-clustermesh-secrets\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785496 kubelet[2600]: I0212 19:24:49.785442 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-cilium-run\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785496 kubelet[2600]: I0212 19:24:49.785463 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-hostproc\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785496 kubelet[2600]: I0212 19:24:49.785486 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-etc-cni-netd\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785606 kubelet[2600]: I0212 19:24:49.785520 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-host-proc-sys-net\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785606 kubelet[2600]: I0212 19:24:49.785541 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0b70784-4e93-4c9e-8356-f335f9be52d7-hubble-tls\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785606 kubelet[2600]: I0212 19:24:49.785561 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-xtables-lock\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785606 kubelet[2600]: I0212 19:24:49.785582 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0b70784-4e93-4c9e-8356-f335f9be52d7-cilium-cgroup\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.785694 kubelet[2600]: I0212 19:24:49.785611 2600 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0b70784-4e93-4c9e-8356-f335f9be52d7-cilium-config-path\") pod \"cilium-h6pcq\" (UID: \"b0b70784-4e93-4c9e-8356-f335f9be52d7\") " pod="kube-system/cilium-h6pcq" Feb 12 19:24:49.975977 env[1429]: time="2024-02-12T19:24:49.975812655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6pcq,Uid:b0b70784-4e93-4c9e-8356-f335f9be52d7,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:50.010172 env[1429]: time="2024-02-12T19:24:50.010103002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:50.010172 env[1429]: time="2024-02-12T19:24:50.010142882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:50.010373 env[1429]: time="2024-02-12T19:24:50.010153202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:50.010516 env[1429]: time="2024-02-12T19:24:50.010484478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e pid=4656 runtime=io.containerd.runc.v2 Feb 12 19:24:50.042690 env[1429]: time="2024-02-12T19:24:50.042643506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6pcq,Uid:b0b70784-4e93-4c9e-8356-f335f9be52d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\"" Feb 12 19:24:50.045875 env[1429]: time="2024-02-12T19:24:50.045546633Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:24:50.073425 env[1429]: time="2024-02-12T19:24:50.073373311Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0eb586e60da6675dcdaee88b30eef59051671f7445ef9c8d4091384d390cc617\"" Feb 12 19:24:50.074097 env[1429]: time="2024-02-12T19:24:50.073896745Z" level=info msg="StartContainer for \"0eb586e60da6675dcdaee88b30eef59051671f7445ef9c8d4091384d390cc617\"" Feb 12 19:24:50.121280 env[1429]: time="2024-02-12T19:24:50.121228918Z" level=info msg="StartContainer for \"0eb586e60da6675dcdaee88b30eef59051671f7445ef9c8d4091384d390cc617\" returns successfully" Feb 12 19:24:50.147135 kubelet[2600]: I0212 19:24:50.146885 2600 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6def14aa-e658-4f0b-8dd5-e554c3fb4b58 path="/var/lib/kubelet/pods/6def14aa-e658-4f0b-8dd5-e554c3fb4b58/volumes" Feb 12 19:24:50.181515 env[1429]: time="2024-02-12T19:24:50.181472461Z" level=info msg="shim disconnected" id=0eb586e60da6675dcdaee88b30eef59051671f7445ef9c8d4091384d390cc617 Feb 12 19:24:50.181789 env[1429]: time="2024-02-12T19:24:50.181762618Z" level=warning msg="cleaning up after shim disconnected" id=0eb586e60da6675dcdaee88b30eef59051671f7445ef9c8d4091384d390cc617 namespace=k8s.io Feb 12 19:24:50.181863 env[1429]: time="2024-02-12T19:24:50.181850417Z" level=info msg="cleaning up dead shim" Feb 12 19:24:50.189384 env[1429]: time="2024-02-12T19:24:50.189346970Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4740 runtime=io.containerd.runc.v2\n" Feb 12 19:24:50.203533 kubelet[2600]: E0212 19:24:50.203477 2600 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:24:50.647889 env[1429]: time="2024-02-12T19:24:50.647850229Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:24:50.679421 env[1429]: 
time="2024-02-12T19:24:50.679373744Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16a3a695846cf7ab52bfd0607e092fab9ba74e630519a0d6f067f549818d5cfd\"" Feb 12 19:24:50.680249 env[1429]: time="2024-02-12T19:24:50.680223695Z" level=info msg="StartContainer for \"16a3a695846cf7ab52bfd0607e092fab9ba74e630519a0d6f067f549818d5cfd\"" Feb 12 19:24:50.784541 env[1429]: time="2024-02-12T19:24:50.784494369Z" level=info msg="StartContainer for \"16a3a695846cf7ab52bfd0607e092fab9ba74e630519a0d6f067f549818d5cfd\" returns successfully" Feb 12 19:24:50.819215 env[1429]: time="2024-02-12T19:24:50.819142648Z" level=info msg="shim disconnected" id=16a3a695846cf7ab52bfd0607e092fab9ba74e630519a0d6f067f549818d5cfd Feb 12 19:24:50.819215 env[1429]: time="2024-02-12T19:24:50.819211048Z" level=warning msg="cleaning up after shim disconnected" id=16a3a695846cf7ab52bfd0607e092fab9ba74e630519a0d6f067f549818d5cfd namespace=k8s.io Feb 12 19:24:50.819215 env[1429]: time="2024-02-12T19:24:50.819222928Z" level=info msg="cleaning up dead shim" Feb 12 19:24:50.826454 env[1429]: time="2024-02-12T19:24:50.826411884Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4801 runtime=io.containerd.runc.v2\n" Feb 12 19:24:51.653483 env[1429]: time="2024-02-12T19:24:51.653436416Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:24:51.673449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703356157.mount: Deactivated successfully. Feb 12 19:24:51.681543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2137130115.mount: Deactivated successfully. 
Feb 12 19:24:51.691627 env[1429]: time="2024-02-12T19:24:51.691541960Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dfe2561648896025119a6da4df7a3bdfaed3b79cd4f9566bba6628ad6bb91cfb\"" Feb 12 19:24:51.692353 env[1429]: time="2024-02-12T19:24:51.692307551Z" level=info msg="StartContainer for \"dfe2561648896025119a6da4df7a3bdfaed3b79cd4f9566bba6628ad6bb91cfb\"" Feb 12 19:24:51.754617 env[1429]: time="2024-02-12T19:24:51.754563631Z" level=info msg="StartContainer for \"dfe2561648896025119a6da4df7a3bdfaed3b79cd4f9566bba6628ad6bb91cfb\" returns successfully" Feb 12 19:24:51.799881 env[1429]: time="2024-02-12T19:24:51.799830657Z" level=info msg="shim disconnected" id=dfe2561648896025119a6da4df7a3bdfaed3b79cd4f9566bba6628ad6bb91cfb Feb 12 19:24:51.799881 env[1429]: time="2024-02-12T19:24:51.799878536Z" level=warning msg="cleaning up after shim disconnected" id=dfe2561648896025119a6da4df7a3bdfaed3b79cd4f9566bba6628ad6bb91cfb namespace=k8s.io Feb 12 19:24:51.799881 env[1429]: time="2024-02-12T19:24:51.799889216Z" level=info msg="cleaning up dead shim" Feb 12 19:24:51.807864 env[1429]: time="2024-02-12T19:24:51.807809449Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4860 runtime=io.containerd.runc.v2\n" Feb 12 19:24:52.657417 env[1429]: time="2024-02-12T19:24:52.657376617Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:24:52.690992 env[1429]: time="2024-02-12T19:24:52.690928232Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a6672db41277d08bafcd2ca4c30dee7683940d002a4efd8efc8d38a81d287a20\"" Feb 12 19:24:52.691721 env[1429]: time="2024-02-12T19:24:52.691696864Z" level=info msg="StartContainer for \"a6672db41277d08bafcd2ca4c30dee7683940d002a4efd8efc8d38a81d287a20\"" Feb 12 19:24:52.743857 env[1429]: time="2024-02-12T19:24:52.743803927Z" level=info msg="StartContainer for \"a6672db41277d08bafcd2ca4c30dee7683940d002a4efd8efc8d38a81d287a20\" returns successfully" Feb 12 19:24:52.768730 env[1429]: time="2024-02-12T19:24:52.768683551Z" level=info msg="shim disconnected" id=a6672db41277d08bafcd2ca4c30dee7683940d002a4efd8efc8d38a81d287a20 Feb 12 19:24:52.768967 env[1429]: time="2024-02-12T19:24:52.768949028Z" level=warning msg="cleaning up after shim disconnected" id=a6672db41277d08bafcd2ca4c30dee7683940d002a4efd8efc8d38a81d287a20 namespace=k8s.io Feb 12 19:24:52.769053 env[1429]: time="2024-02-12T19:24:52.769037507Z" level=info msg="cleaning up dead shim" Feb 12 19:24:52.775540 env[1429]: time="2024-02-12T19:24:52.775499280Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4915 runtime=io.containerd.runc.v2\n" Feb 12 19:24:52.893931 systemd[1]: run-containerd-runc-k8s.io-a6672db41277d08bafcd2ca4c30dee7683940d002a4efd8efc8d38a81d287a20-runc.LvW90i.mount: Deactivated successfully. Feb 12 19:24:52.894073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6672db41277d08bafcd2ca4c30dee7683940d002a4efd8efc8d38a81d287a20-rootfs.mount: Deactivated successfully. 
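The mount-bpf-fs step that just ran exists to ensure the BPF filesystem is mounted at /sys/fs/bpf before the agent loads its programs, and clean-cilium-state then wipes leftovers from any previous run. The mount itself is a one-liner; a sketch of the equivalent syscall, assuming the conventional mount point and root privileges:

    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Equivalent to: mount -t bpf bpffs /sys/fs/bpf
        // EBUSY here usually just means the BPF filesystem is already mounted.
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
            log.Fatalf("mounting bpffs: %v", err)
        }
        log.Println("bpf filesystem available at /sys/fs/bpf")
    }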
Feb 12 19:24:53.661433 env[1429]: time="2024-02-12T19:24:53.661394361Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:24:53.688957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237990973.mount: Deactivated successfully. Feb 12 19:24:53.702533 env[1429]: time="2024-02-12T19:24:53.702486963Z" level=info msg="CreateContainer within sandbox \"5bf6db6e06cd3417d5438bb5b1d062783f3fe30bd0a5ab9dbb79f839f55c9d6e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"adc0acd6987d0cffdd181b4fabc5363a3c190292b4c26b51070ddb24ecec79e0\"" Feb 12 19:24:53.703391 env[1429]: time="2024-02-12T19:24:53.703363435Z" level=info msg="StartContainer for \"adc0acd6987d0cffdd181b4fabc5363a3c190292b4c26b51070ddb24ecec79e0\"" Feb 12 19:24:53.766712 env[1429]: time="2024-02-12T19:24:53.766657942Z" level=info msg="StartContainer for \"adc0acd6987d0cffdd181b4fabc5363a3c190292b4c26b51070ddb24ecec79e0\" returns successfully" Feb 12 19:24:54.147203 kubelet[2600]: E0212 19:24:54.146364 2600 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-v9pdf" podUID=b97b5fde-8b08-4a27-bd3c-b1f3ba1747b5 Feb 12 19:24:54.148207 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 19:24:54.677277 kubelet[2600]: I0212 19:24:54.677230 2600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h6pcq" podStartSLOduration=5.677174612 pod.CreationTimestamp="2024-02-12 19:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:54.676373899 +0000 UTC m=+254.825008269" watchObservedRunningTime="2024-02-12 19:24:54.677174612 +0000 UTC m=+254.825808982" Feb 12 19:24:55.508725 systemd[1]: run-containerd-runc-k8s.io-adc0acd6987d0cffdd181b4fabc5363a3c190292b4c26b51070ddb24ecec79e0-runc.CKsbvw.mount: Deactivated successfully. Feb 12 19:24:56.756301 systemd-networkd[1604]: lxc_health: Link UP Feb 12 19:24:56.771863 systemd-networkd[1604]: lxc_health: Gained carrier Feb 12 19:24:56.772217 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:24:57.658682 systemd[1]: run-containerd-runc-k8s.io-adc0acd6987d0cffdd181b4fabc5363a3c190292b4c26b51070ddb24ecec79e0-runc.IjGjid.mount: Deactivated successfully. Feb 12 19:24:58.477375 systemd-networkd[1604]: lxc_health: Gained IPv6LL Feb 12 19:24:59.827520 systemd[1]: run-containerd-runc-k8s.io-adc0acd6987d0cffdd181b4fabc5363a3c190292b4c26b51070ddb24ecec79e0-runc.1zrYOa.mount: Deactivated successfully. Feb 12 19:25:01.954172 systemd[1]: run-containerd-runc-k8s.io-adc0acd6987d0cffdd181b4fabc5363a3c190292b4c26b51070ddb24ecec79e0-runc.iuT21C.mount: Deactivated successfully. Feb 12 19:25:04.085920 systemd[1]: run-containerd-runc-k8s.io-adc0acd6987d0cffdd181b4fabc5363a3c190292b4c26b51070ddb24ecec79e0-runc.2gDNDt.mount: Deactivated successfully. 
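By this point the log has walked Cilium's full init chain in order — mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state — each shim exiting before the next container starts, followed by the long-running cilium-agent and the lxc_health link coming up. A hedged client-go sketch of watching that progression from the API side; the pod name comes from the log, while in-cluster configuration is an assumption about where the sketch runs:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Assumes this runs inside the cluster; out of cluster, use clientcmd instead.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pod, err := clientset.CoreV1().Pods("kube-system").Get(
            context.Background(), "cilium-h6pcq", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // Init containers run strictly in order; each "shim disconnected" above
        // corresponds to one of these reaching a terminated state with exit code 0.
        for _, cs := range pod.Status.InitContainerStatuses {
            fmt.Printf("init %s ready=%v restarts=%d\n", cs.Name, cs.Ready, cs.RestartCount)
        }
        fmt.Println("phase:", pod.Status.Phase)
    }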
Feb 12 19:25:04.132534 kubelet[2600]: E0212 19:25:04.132450 2600 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60918->127.0.0.1:44587: write tcp 127.0.0.1:60918->127.0.0.1:44587: write: broken pipe Feb 12 19:25:04.219509 sshd[4599]: pam_unix(sshd:session): session closed for user core Feb 12 19:25:04.221848 systemd[1]: sshd@23-10.200.20.34:22-10.200.12.6:36500.service: Deactivated successfully. Feb 12 19:25:04.222706 systemd[1]: session-26.scope: Deactivated successfully. Feb 12 19:25:04.223987 systemd-logind[1416]: Session 26 logged out. Waiting for processes to exit. Feb 12 19:25:04.224950 systemd-logind[1416]: Removed session 26. Feb 12 19:25:18.705620 kubelet[2600]: E0212 19:25:18.705494 2600 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.34:38622->10.200.20.27:2379: read: connection timed out Feb 12 19:25:18.732630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6022826fe8999c4b5a4e8a13befc417e2ada333a6ab60422a21b230a04a9cb20-rootfs.mount: Deactivated successfully. Feb 12 19:25:18.799695 env[1429]: time="2024-02-12T19:25:18.799642635Z" level=info msg="shim disconnected" id=6022826fe8999c4b5a4e8a13befc417e2ada333a6ab60422a21b230a04a9cb20 Feb 12 19:25:18.799695 env[1429]: time="2024-02-12T19:25:18.799691355Z" level=warning msg="cleaning up after shim disconnected" id=6022826fe8999c4b5a4e8a13befc417e2ada333a6ab60422a21b230a04a9cb20 namespace=k8s.io Feb 12 19:25:18.799695 env[1429]: time="2024-02-12T19:25:18.799701555Z" level=info msg="cleaning up dead shim" Feb 12 19:25:18.807811 env[1429]: time="2024-02-12T19:25:18.807765499Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:25:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5611 runtime=io.containerd.runc.v2\n" Feb 12 19:25:19.575088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c5785ca684baf005fd71c68c403c4b4235e62e9f0dc853af3b49e2f43cff789-rootfs.mount: Deactivated successfully. 
Feb 12 19:25:19.587214 env[1429]: time="2024-02-12T19:25:19.586710103Z" level=info msg="shim disconnected" id=0c5785ca684baf005fd71c68c403c4b4235e62e9f0dc853af3b49e2f43cff789 Feb 12 19:25:19.587214 env[1429]: time="2024-02-12T19:25:19.586765343Z" level=warning msg="cleaning up after shim disconnected" id=0c5785ca684baf005fd71c68c403c4b4235e62e9f0dc853af3b49e2f43cff789 namespace=k8s.io Feb 12 19:25:19.587214 env[1429]: time="2024-02-12T19:25:19.586774423Z" level=info msg="cleaning up dead shim" Feb 12 19:25:19.593988 env[1429]: time="2024-02-12T19:25:19.593942047Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:25:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5637 runtime=io.containerd.runc.v2\n" Feb 12 19:25:19.708882 kubelet[2600]: I0212 19:25:19.708446 2600 scope.go:115] "RemoveContainer" containerID="0c5785ca684baf005fd71c68c403c4b4235e62e9f0dc853af3b49e2f43cff789" Feb 12 19:25:19.712300 env[1429]: time="2024-02-12T19:25:19.712251483Z" level=info msg="CreateContainer within sandbox \"6681269274adf3d150f241a2bb7bfefd4b47376597ec37e5dd1cebc2a6589438\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 12 19:25:19.712898 kubelet[2600]: I0212 19:25:19.712881 2600 scope.go:115] "RemoveContainer" containerID="6022826fe8999c4b5a4e8a13befc417e2ada333a6ab60422a21b230a04a9cb20" Feb 12 19:25:19.714996 env[1429]: time="2024-02-12T19:25:19.714961732Z" level=info msg="CreateContainer within sandbox \"5792cd430cc44b01f51a6bf3c7f1eeac27d266370a799ea51b0fc0e6f28d3ba1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 12 19:25:19.738886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052134960.mount: Deactivated successfully. Feb 12 19:25:19.747066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573744907.mount: Deactivated successfully. Feb 12 19:25:19.764082 env[1429]: time="2024-02-12T19:25:19.764036776Z" level=info msg="CreateContainer within sandbox \"6681269274adf3d150f241a2bb7bfefd4b47376597ec37e5dd1cebc2a6589438\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e570210a4641824a45ea418bc23bb9db084721b4a30a82037d44f918091c2872\"" Feb 12 19:25:19.764726 env[1429]: time="2024-02-12T19:25:19.764703378Z" level=info msg="StartContainer for \"e570210a4641824a45ea418bc23bb9db084721b4a30a82037d44f918091c2872\"" Feb 12 19:25:19.773435 env[1429]: time="2024-02-12T19:25:19.773390127Z" level=info msg="CreateContainer within sandbox \"5792cd430cc44b01f51a6bf3c7f1eeac27d266370a799ea51b0fc0e6f28d3ba1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6edaa885f98359b9fe6a41789eab8ccbff3d12816d2f52b9e40d3bce3e7d1d2d\"" Feb 12 19:25:19.773953 env[1429]: time="2024-02-12T19:25:19.773930609Z" level=info msg="StartContainer for \"6edaa885f98359b9fe6a41789eab8ccbff3d12816d2f52b9e40d3bce3e7d1d2d\"" Feb 12 19:25:19.863362 env[1429]: time="2024-02-12T19:25:19.863224827Z" level=info msg="StartContainer for \"6edaa885f98359b9fe6a41789eab8ccbff3d12816d2f52b9e40d3bce3e7d1d2d\" returns successfully" Feb 12 19:25:19.864437 env[1429]: time="2024-02-12T19:25:19.864409071Z" level=info msg="StartContainer for \"e570210a4641824a45ea418bc23bb9db084721b4a30a82037d44f918091c2872\" returns successfully" Feb 12 19:25:20.740345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611712232.mount: Deactivated successfully. 
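Here the kubelet reaps the exited kube-controller-manager and kube-scheduler containers and recreates each one inside its existing pod sandbox, bumping Attempt from 0 to 1; on the API side that surfaces as an incremented restartCount on the mirror pod. A sketch of reading that counter — the mirror-pod name is inferred from the node name seen elsewhere in the log, and the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption, not taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Static pods are mirrored into the API as "<name>-<nodeName>"; the node
        // name here comes from the "on node" fields earlier in the log.
        pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(),
            "kube-scheduler-ci-3510.3.2-a-7e4be4023b", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, cs := range pod.Status.ContainerStatuses {
            // After the Attempt:1 recreation above, this would report restarts=1.
            fmt.Printf("%s restarts=%d ready=%v\n", cs.Name, cs.RestartCount, cs.Ready)
        }
    }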
Feb 12 19:25:22.862398 kubelet[2600]: E0212 19:25:22.862258 2600 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-7e4be4023b.17b3340c9a0b459b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-7e4be4023b", UID:"2a241dcf89a80f125870539e0b789f93", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-7e4be4023b"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 25, 12, 432231835, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 25, 12, 432231835, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.34:38414->10.200.20.27:2379: read: connection timed out' (will not retry!) Feb 12 19:25:28.706543 kubelet[2600]: E0212 19:25:28.706499 2600 controller.go:189] failed to update lease, error: Put "https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-7e4be4023b?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 19:25:38.707530 kubelet[2600]: E0212 19:25:38.707498 2600 request.go:1075] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Feb 12 19:25:38.707943 kubelet[2600]: E0212 19:25:38.707928 2600 controller.go:189] failed to update lease, error: unexpected error when reading response body. Please retry. 
Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body) Feb 12 19:25:39.985147 env[1429]: time="2024-02-12T19:25:39.985104641Z" level=info msg="StopPodSandbox for \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\"" Feb 12 19:25:39.985512 env[1429]: time="2024-02-12T19:25:39.985225083Z" level=info msg="TearDown network for sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" successfully" Feb 12 19:25:39.985512 env[1429]: time="2024-02-12T19:25:39.985259283Z" level=info msg="StopPodSandbox for \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" returns successfully" Feb 12 19:25:39.985984 env[1429]: time="2024-02-12T19:25:39.985957250Z" level=info msg="RemovePodSandbox for \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\"" Feb 12 19:25:39.986045 env[1429]: time="2024-02-12T19:25:39.985999251Z" level=info msg="Forcibly stopping sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\"" Feb 12 19:25:39.986095 env[1429]: time="2024-02-12T19:25:39.986063371Z" level=info msg="TearDown network for sandbox \"053139a9f850f33fbf5803ee3b0d37c89a1b85de62eb9d5b81406f3710348dea\" successfully" Feb 12 19:25:40.000201 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.000469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.019428 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.019748 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.028706 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.054109 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.054379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.064226 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.071812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.080479 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.105244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.105497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.113535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.122270 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.130894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.146726 kubelet[2600]: W0212 19:25:40.146703 2600 machine.go:65] Cannot read vendor id correctly, set empty. 
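The lease-update failures above (reads from 10.200.20.27:2379 timing out) mean the kubelet cannot renew its Lease object in kube-node-lease; if renewTime stays stale past the lease duration, the node-lifecycle controller will eventually mark the node NotReady. A sketch of inspecting that lease from a still-working client, under the same kubeconfig and node-name assumptions as the previous sketch:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Each node owns one Lease in kube-node-lease; the "failed to update
        // lease" errors above are failed writes to exactly this object.
        lease, err := clientset.CoordinationV1().Leases("kube-node-lease").Get(
            context.Background(), "ci-3510.3.2-a-7e4be4023b", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        if lease.Spec.HolderIdentity != nil && lease.Spec.RenewTime != nil {
            fmt.Printf("holder=%s last renewed %s ago\n", *lease.Spec.HolderIdentity,
                time.Since(lease.Spec.RenewTime.Time).Round(time.Second))
        }
    }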
Feb 12 19:25:40.166692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.166925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.167044 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.175497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.194330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.212405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.212640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.221863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.230962 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.239862 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.265635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.265918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.274496 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.283639 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.292480 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.320000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.320317 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.320428 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.328919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.348205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.348440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.366043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.366252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.374756 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.393035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.393316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 
19:25:40.411090 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.411352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.420334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.438627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.438859 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.457191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.457472 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.466207 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.483983 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.484214 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.502522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.502799 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.511143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.528955 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.529283 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.547698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.547988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.557496 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.575161 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.575386 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.593520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.593781 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.602561 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.611609 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.629537 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.629762 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.638617 
kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.647533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.656527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.675105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.675337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.684548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.693904 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.713240 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.713475 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.732844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.733098 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.752373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.752650 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.771598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.771868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.790301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.790550 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.799425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.818579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.818824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.828272 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.838121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.847641 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.876338 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.876652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.876764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.886064 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.895296 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.914084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.914325 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.923233 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.943445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.943768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.960701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.960933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.970043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.979360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.999732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:40.999944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:41.020172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:41.020367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:41.039124 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:41.039455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:41.057844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:25:41.058078 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
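The hv_storvsc flood that closes the log decodes to repeated failed writes: cmd 0x2a is the SCSI WRITE(10) opcode, scsi 0x2 is the CHECK CONDITION status, and srb 0x4 is SRB_STATUS_ERROR from the Windows SRB conventions Hyper-V's storage protocol reuses; reading hv 0xc0000001 as the NTSTATUS value STATUS_UNSUCCESSFUL is an inference, not something the log confirms. A toy decoder for those fields, for illustration only:

    package main

    import "fmt"

    // decodeStorvsc names the numeric fields of a storvsc error line. The cmd
    // and scsi mappings follow the SCSI spec; the srb mapping follows Windows
    // SRB status codes. Best-effort and deliberately not exhaustive.
    func decodeStorvsc(cmd, scsi, srb byte, hv uint32) string {
        cmds := map[byte]string{0x28: "READ(10)", 0x2a: "WRITE(10)"}
        scsiStatus := map[byte]string{0x00: "GOOD", 0x02: "CHECK CONDITION"}
        srbStatus := map[byte]string{0x01: "SRB_STATUS_SUCCESS", 0x04: "SRB_STATUS_ERROR"}
        return fmt.Sprintf("%s failed: scsi=%s srb=%s hv=0x%08x",
            cmds[cmd], scsiStatus[scsi], srbStatus[srb], hv)
    }

    func main() {
        // Field values taken from the repeated kernel lines above.
        fmt.Println(decodeStorvsc(0x2a, 0x02, 0x04, 0xc0000001))
        // WRITE(10) failed: scsi=CHECK CONDITION srb=SRB_STATUS_ERROR hv=0xc0000001
    }

Taken together with the earlier etcd read timeouts and lease failures, the pattern is consistent with the node's virtual disk going unwritable at the end of the capture.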